I0524 23:39:07.718900 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0524 23:39:07.719061 7 e2e.go:129] Starting e2e run "527e4af2-4a80-4010-89b2-297a700173c5" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590363546 - Will randomize all specs
Will run 288 of 5095 specs

May 24 23:39:07.768: INFO: >>> kubeConfig: /root/.kube/config
May 24 23:39:07.773: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 24 23:39:07.795: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 24 23:39:07.844: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 24 23:39:07.844: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 24 23:39:07.844: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 24 23:39:07.880: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 24 23:39:07.880: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 24 23:39:07.880: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 24 23:39:07.881: INFO: kube-apiserver version: v1.18.2
May 24 23:39:07.881: INFO: >>> kubeConfig: /root/.kube/config
May 24 23:39:07.885: INFO: Cluster IP family: ipv4
SSSSSSSSSS
------------------------------
[sig-network] Services
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 23:39:07.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
May 24 23:39:07.958: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service nodeport-service with the type=NodePort in namespace services-389
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-389
STEP: creating replication controller externalsvc in namespace services-389
I0524 23:39:08.232815 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-389, replica count: 2
I0524 23:39:11.283356 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0524 23:39:14.283663 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the NodePort service to type=ExternalName
May 24 23:39:14.353: INFO: Creating new exec pod
May 24 23:39:18.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-389 execpodql8ht -- /bin/sh -x -c nslookup nodeport-service'
May 24 23:39:21.461: INFO: stderr: "I0524 23:39:21.201702 31 log.go:172] (0xc0007c86e0) (0xc0006f0fa0) Create stream\nI0524 23:39:21.201777 31 log.go:172] (0xc0007c86e0) (0xc0006f0fa0) Stream added, broadcasting: 1\nI0524 23:39:21.218248 31 log.go:172] (0xc0007c86e0) Reply frame received for 1\nI0524 23:39:21.218294 31 log.go:172] (0xc0007c86e0) (0xc0006d2d20) Create stream\nI0524 23:39:21.218304 31 log.go:172] (0xc0007c86e0) (0xc0006d2d20) Stream added, broadcasting: 3\nI0524 23:39:21.219166 31 log.go:172] (0xc0007c86e0) Reply frame received for 3\nI0524 23:39:21.219202 31 log.go:172] (0xc0007c86e0) (0xc0006ba5a0) Create stream\nI0524 23:39:21.219215 31 log.go:172] (0xc0007c86e0) (0xc0006ba5a0) Stream added, broadcasting: 5\nI0524 23:39:21.220044 31 log.go:172] (0xc0007c86e0) Reply frame received for 5\nI0524 23:39:21.330846 31 log.go:172] (0xc0007c86e0) Data frame received for 5\nI0524 23:39:21.330870 31 log.go:172] (0xc0006ba5a0) (5) Data frame handling\nI0524 23:39:21.330880 31 log.go:172] (0xc0006ba5a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0524 23:39:21.452178 31 log.go:172] (0xc0007c86e0) Data frame received for 3\nI0524 23:39:21.452224 31 log.go:172] (0xc0006d2d20) (3) Data frame handling\nI0524 23:39:21.452255 31 log.go:172] (0xc0006d2d20) (3) Data frame sent\nI0524 23:39:21.453282 31 log.go:172] (0xc0007c86e0) Data frame received for 3\nI0524 23:39:21.453310 31 log.go:172] (0xc0006d2d20) (3) Data frame handling\nI0524 23:39:21.453336 31 log.go:172] (0xc0006d2d20) (3) Data frame sent\nI0524 23:39:21.453977 31 log.go:172] (0xc0007c86e0) Data frame received for 3\nI0524 23:39:21.454033 31 log.go:172] (0xc0006d2d20) (3) Data frame handling\nI0524 23:39:21.454063 31 log.go:172] (0xc0007c86e0) Data frame received for 5\nI0524 23:39:21.454077 31 log.go:172] (0xc0006ba5a0) (5) Data frame handling\nI0524 23:39:21.456104 31 log.go:172] (0xc0007c86e0) Data frame received for 1\nI0524 23:39:21.456135 31 log.go:172] (0xc0006f0fa0) (1) Data frame handling\nI0524 23:39:21.456171 31 log.go:172] (0xc0006f0fa0) (1) Data frame sent\nI0524 23:39:21.456229 31 log.go:172] (0xc0007c86e0) (0xc0006f0fa0) Stream removed, broadcasting: 1\nI0524 23:39:21.456483 31 log.go:172] (0xc0007c86e0) (0xc0006f0fa0) Stream removed, broadcasting: 1\nI0524 23:39:21.456495 31 log.go:172] (0xc0007c86e0) (0xc0006d2d20) Stream removed, broadcasting: 3\nI0524 23:39:21.456613 31 log.go:172] (0xc0007c86e0) (0xc0006ba5a0) Stream removed, broadcasting: 5\n"
May 24 23:39:21.462: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-389.svc.cluster.local\tcanonical name = externalsvc.services-389.svc.cluster.local.\nName:\texternalsvc.services-389.svc.cluster.local\nAddress: 10.106.218.87\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-389, will wait for the garbage collector to delete the pods
May 24 23:39:21.559: INFO: Deleting ReplicationController externalsvc took: 43.419166ms
May 24 23:39:21.859: INFO: Terminating ReplicationController externalsvc pods took: 300.345556ms
May 24 23:39:35.308: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 23:39:35.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-389" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:27.514 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":1,"skipped":10,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 23:39:35.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-df37f7ce-6495-47a2-8d51-310c3c3c13bf
STEP: Creating a pod to test consume secrets
May 24 23:39:35.556: INFO: Waiting up to 5m0s for pod "pod-secrets-2f59662b-2e69-4059-bb7c-0326c64a84e1" in namespace "secrets-6567" to be "Succeeded or Failed"
May 24 23:39:35.575: INFO: Pod "pod-secrets-2f59662b-2e69-4059-bb7c-0326c64a84e1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.285604ms
May 24 23:39:37.659: INFO: Pod "pod-secrets-2f59662b-2e69-4059-bb7c-0326c64a84e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102594103s
May 24 23:39:39.754: INFO: Pod "pod-secrets-2f59662b-2e69-4059-bb7c-0326c64a84e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.197949054s
STEP: Saw pod success
May 24 23:39:39.754: INFO: Pod "pod-secrets-2f59662b-2e69-4059-bb7c-0326c64a84e1" satisfied condition "Succeeded or Failed"
May 24 23:39:39.757: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2f59662b-2e69-4059-bb7c-0326c64a84e1 container secret-volume-test:
STEP: delete the pod
May 24 23:39:39.809: INFO: Waiting for pod pod-secrets-2f59662b-2e69-4059-bb7c-0326c64a84e1 to disappear
May 24 23:39:39.999: INFO: Pod pod-secrets-2f59662b-2e69-4059-bb7c-0326c64a84e1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 23:39:39.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6567" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":2,"skipped":18,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 23:39:40.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 24 23:39:40.582: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 24 23:39:42.592: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725960380, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725960380, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725960380, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725960380, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 24 23:39:45.633: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 23:39:46.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3895" for this suite.
STEP: Destroying namespace "webhook-3895-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.407 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":3,"skipped":23,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 23:39:46.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 23:40:02.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-210" for this suite.
• [SLOW TEST:16.156 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":288,"completed":4,"skipped":31,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 23:40:02.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
May 24 23:40:02.742: INFO: namespace kubectl-7903
May 24 23:40:02.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7903'
May 24 23:40:03.096: INFO: stderr: ""
May 24 23:40:03.096: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 24 23:40:04.101: INFO: Selector matched 1 pods for map[app:agnhost]
May 24 23:40:04.101: INFO: Found 0 / 1
May 24 23:40:05.100: INFO: Selector matched 1 pods for map[app:agnhost]
May 24 23:40:05.100: INFO: Found 0 / 1
May 24 23:40:06.100: INFO: Selector matched 1 pods for map[app:agnhost]
May 24 23:40:06.100: INFO: Found 0 / 1
May 24 23:40:07.101: INFO: Selector matched 1 pods for map[app:agnhost]
May 24 23:40:07.101: INFO: Found 1 / 1
May 24 23:40:07.101: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 24 23:40:07.104: INFO: Selector matched 1 pods for map[app:agnhost]
May 24 23:40:07.104: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 24 23:40:07.104: INFO: wait on agnhost-master startup in kubectl-7903
May 24 23:40:07.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-6v255 agnhost-master --namespace=kubectl-7903'
May 24 23:40:07.229: INFO: stderr: ""
May 24 23:40:07.229: INFO: stdout: "Paused\n"
STEP: exposing RC
May 24 23:40:07.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7903'
May 24 23:40:07.428: INFO: stderr: ""
May 24 23:40:07.428: INFO: stdout: "service/rm2 exposed\n"
May 24 23:40:07.461: INFO: Service rm2 in namespace kubectl-7903 found.
STEP: exposing service
May 24 23:40:09.467: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7903'
May 24 23:40:09.618: INFO: stderr: ""
May 24 23:40:09.618: INFO: stdout: "service/rm3 exposed\n"
May 24 23:40:09.629: INFO: Service rm3 in namespace kubectl-7903 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 23:40:11.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7903" for this suite.
• [SLOW TEST:9.039 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":5,"skipped":60,"failed":0}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 23:40:11.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 23:40:46.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3531" for this suite.
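
The properties this test asserts ('RestartCount', 'Phase', 'Ready', 'State') are all readable straight from pod status. A minimal sketch of the same checks, using a hypothetical pod name terminate-demo; a container that exits 0 under restartPolicy Never should leave the pod in phase Succeeded with restartCount 0:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 0"]
EOF

# Once the container has exited, inspect the fields the test asserts on.
kubectl get pod terminate-demo -o jsonpath='{.status.phase}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].state}'
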
• [SLOW TEST:34.690 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":6,"skipped":64,"failed":0}
S
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 23:40:46.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-3113c585-1152-4267-bb04-03767fb8ad22
STEP: Creating a pod to test consume configMaps
May 24 23:40:46.466: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-48672098-f59f-48ac-8653-158df26cd2f0" in namespace "projected-5924" to be "Succeeded or Failed"
May 24 23:40:46.520: INFO: Pod "pod-projected-configmaps-48672098-f59f-48ac-8653-158df26cd2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 53.602294ms
May 24 23:40:48.524: INFO: Pod "pod-projected-configmaps-48672098-f59f-48ac-8653-158df26cd2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057940712s
May 24 23:40:50.528: INFO: Pod "pod-projected-configmaps-48672098-f59f-48ac-8653-158df26cd2f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062110524s
STEP: Saw pod success
May 24 23:40:50.528: INFO: Pod "pod-projected-configmaps-48672098-f59f-48ac-8653-158df26cd2f0" satisfied condition "Succeeded or Failed"
May 24 23:40:50.531: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-48672098-f59f-48ac-8653-158df26cd2f0 container projected-configmap-volume-test:
STEP: delete the pod
May 24 23:40:50.573: INFO: Waiting for pod pod-projected-configmaps-48672098-f59f-48ac-8653-158df26cd2f0 to disappear
May 24 23:40:50.590: INFO: Pod pod-projected-configmaps-48672098-f59f-48ac-8653-158df26cd2f0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 23:40:50.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5924" for this suite.
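
The pod shape this test builds can be written out by hand. A minimal sketch with hypothetical names; the projected volume serves the ConfigMap content, and runAsUser makes the read happen as a non-root user, which is the point of this conformance variant:

kubectl create configmap demo-config --from-literal=data-1=value-1

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # non-root reader
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
EOF

# After the pod reaches Succeeded, its log should show "value-1".
kubectl logs projected-configmap-demo
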
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":7,"skipped":65,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:40:50.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-8599e98e-c713-48b2-8c82-920af8c7f89c STEP: Creating a pod to test consume secrets May 24 23:40:50.742: INFO: Waiting up to 5m0s for pod "pod-secrets-b56a1daf-355f-47de-a25e-cc33dcec27da" in namespace "secrets-8770" to be "Succeeded or Failed" May 24 23:40:50.771: INFO: Pod "pod-secrets-b56a1daf-355f-47de-a25e-cc33dcec27da": Phase="Pending", Reason="", readiness=false. Elapsed: 28.811977ms May 24 23:40:52.886: INFO: Pod "pod-secrets-b56a1daf-355f-47de-a25e-cc33dcec27da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144705797s May 24 23:40:54.910: INFO: Pod "pod-secrets-b56a1daf-355f-47de-a25e-cc33dcec27da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.168380748s STEP: Saw pod success May 24 23:40:54.910: INFO: Pod "pod-secrets-b56a1daf-355f-47de-a25e-cc33dcec27da" satisfied condition "Succeeded or Failed" May 24 23:40:54.913: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-b56a1daf-355f-47de-a25e-cc33dcec27da container secret-volume-test: STEP: delete the pod May 24 23:40:54.958: INFO: Waiting for pod pod-secrets-b56a1daf-355f-47de-a25e-cc33dcec27da to disappear May 24 23:40:54.996: INFO: Pod pod-secrets-b56a1daf-355f-47de-a25e-cc33dcec27da no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:40:54.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8770" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":8,"skipped":65,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:40:55.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 24 23:40:55.113: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:40:55.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-307" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":9,"skipped":69,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:40:55.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 24 23:40:55.355: INFO: Waiting up to 5m0s for pod "pod-2f5d22cd-ddc2-4294-bcae-963950b4ed2d" in namespace "emptydir-9942" to be "Succeeded or Failed" May 24 23:40:55.367: INFO: Pod "pod-2f5d22cd-ddc2-4294-bcae-963950b4ed2d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.168213ms May 24 23:40:57.371: INFO: Pod "pod-2f5d22cd-ddc2-4294-bcae-963950b4ed2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015760198s May 24 23:40:59.375: INFO: Pod "pod-2f5d22cd-ddc2-4294-bcae-963950b4ed2d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019860073s STEP: Saw pod success May 24 23:40:59.375: INFO: Pod "pod-2f5d22cd-ddc2-4294-bcae-963950b4ed2d" satisfied condition "Succeeded or Failed" May 24 23:40:59.377: INFO: Trying to get logs from node latest-worker pod pod-2f5d22cd-ddc2-4294-bcae-963950b4ed2d container test-container: STEP: delete the pod May 24 23:40:59.601: INFO: Waiting for pod pod-2f5d22cd-ddc2-4294-bcae-963950b4ed2d to disappear May 24 23:40:59.674: INFO: Pod pod-2f5d22cd-ddc2-4294-bcae-963950b4ed2d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:40:59.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9942" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":10,"skipped":102,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:40:59.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-wzfp STEP: Creating a pod to test atomic-volume-subpath May 24 23:40:59.832: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wzfp" in namespace "subpath-6035" to be "Succeeded or Failed" May 24 23:40:59.893: INFO: Pod "pod-subpath-test-secret-wzfp": Phase="Pending", Reason="", readiness=false. Elapsed: 61.062155ms May 24 23:41:01.897: INFO: Pod "pod-subpath-test-secret-wzfp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064614804s May 24 23:41:03.901: INFO: Pod "pod-subpath-test-secret-wzfp": Phase="Running", Reason="", readiness=true. Elapsed: 4.069130596s May 24 23:41:05.905: INFO: Pod "pod-subpath-test-secret-wzfp": Phase="Running", Reason="", readiness=true. Elapsed: 6.073522776s May 24 23:41:07.915: INFO: Pod "pod-subpath-test-secret-wzfp": Phase="Running", Reason="", readiness=true. Elapsed: 8.083096333s May 24 23:41:09.920: INFO: Pod "pod-subpath-test-secret-wzfp": Phase="Running", Reason="", readiness=true. Elapsed: 10.087664248s May 24 23:41:11.924: INFO: Pod "pod-subpath-test-secret-wzfp": Phase="Running", Reason="", readiness=true. Elapsed: 12.091952761s May 24 23:41:13.928: INFO: Pod "pod-subpath-test-secret-wzfp": Phase="Running", Reason="", readiness=true. Elapsed: 14.096399771s May 24 23:41:15.933: INFO: Pod "pod-subpath-test-secret-wzfp": Phase="Running", Reason="", readiness=true. Elapsed: 16.101101129s May 24 23:41:17.937: INFO: Pod "pod-subpath-test-secret-wzfp": Phase="Running", Reason="", readiness=true. Elapsed: 18.105214783s May 24 23:41:19.952: INFO: Pod "pod-subpath-test-secret-wzfp": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.120498442s May 24 23:41:21.957: INFO: Pod "pod-subpath-test-secret-wzfp": Phase="Running", Reason="", readiness=true. Elapsed: 22.124566215s May 24 23:41:23.970: INFO: Pod "pod-subpath-test-secret-wzfp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.137899355s STEP: Saw pod success May 24 23:41:23.970: INFO: Pod "pod-subpath-test-secret-wzfp" satisfied condition "Succeeded or Failed" May 24 23:41:23.972: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-wzfp container test-container-subpath-secret-wzfp: STEP: delete the pod May 24 23:41:24.023: INFO: Waiting for pod pod-subpath-test-secret-wzfp to disappear May 24 23:41:24.039: INFO: Pod pod-subpath-test-secret-wzfp no longer exists STEP: Deleting pod pod-subpath-test-secret-wzfp May 24 23:41:24.039: INFO: Deleting pod "pod-subpath-test-secret-wzfp" in namespace "subpath-6035" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:41:24.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6035" for this suite. • [SLOW TEST:24.362 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":11,"skipped":102,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:41:24.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7067 STEP: creating service affinity-nodeport in namespace services-7067 STEP: creating replication controller affinity-nodeport in namespace services-7067 I0524 23:41:24.533873 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-7067, replica count: 3 I0524 23:41:27.584323 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 23:41:30.584602 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 23:41:30.595: INFO: Creating new exec pod May 24 23:41:35.655: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7067 execpod-affinity85xk5 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 24 23:41:35.950: INFO: stderr: "I0524 23:41:35.811376 172 log.go:172] (0xc000646b00) (0xc0002383c0) Create stream\nI0524 23:41:35.811434 172 log.go:172] (0xc000646b00) (0xc0002383c0) Stream added, broadcasting: 1\nI0524 23:41:35.814616 172 log.go:172] (0xc000646b00) Reply frame received for 1\nI0524 23:41:35.814641 172 log.go:172] (0xc000646b00) (0xc0006ac140) Create stream\nI0524 23:41:35.814649 172 log.go:172] (0xc000646b00) (0xc0006ac140) Stream added, broadcasting: 3\nI0524 23:41:35.815688 172 log.go:172] (0xc000646b00) Reply frame received for 3\nI0524 23:41:35.815733 172 log.go:172] (0xc000646b00) (0xc0000ddf40) Create stream\nI0524 23:41:35.815746 172 log.go:172] (0xc000646b00) (0xc0000ddf40) Stream added, broadcasting: 5\nI0524 23:41:35.816689 172 log.go:172] (0xc000646b00) Reply frame received for 5\nI0524 23:41:35.913010 172 log.go:172] (0xc000646b00) Data frame received for 5\nI0524 23:41:35.913033 172 log.go:172] (0xc0000ddf40) (5) Data frame handling\nI0524 23:41:35.913044 172 log.go:172] (0xc0000ddf40) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0524 23:41:35.942809 172 log.go:172] (0xc000646b00) Data frame received for 5\nI0524 23:41:35.942854 172 log.go:172] (0xc0000ddf40) (5) Data frame handling\nI0524 23:41:35.942876 172 log.go:172] (0xc0000ddf40) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0524 23:41:35.943128 172 log.go:172] (0xc000646b00) Data frame received for 5\nI0524 23:41:35.943148 172 log.go:172] (0xc0000ddf40) (5) Data frame handling\nI0524 23:41:35.943224 172 log.go:172] (0xc000646b00) Data frame received for 3\nI0524 23:41:35.943236 172 log.go:172] (0xc0006ac140) (3) Data frame handling\nI0524 23:41:35.945054 172 log.go:172] (0xc000646b00) Data frame received for 1\nI0524 23:41:35.945095 172 log.go:172] (0xc0002383c0) (1) Data frame handling\nI0524 23:41:35.945316 172 log.go:172] (0xc0002383c0) (1) Data frame sent\nI0524 23:41:35.945361 172 log.go:172] (0xc000646b00) (0xc0002383c0) Stream removed, broadcasting: 1\nI0524 23:41:35.945389 172 log.go:172] (0xc000646b00) Go away received\nI0524 23:41:35.945744 172 log.go:172] (0xc000646b00) (0xc0002383c0) Stream removed, broadcasting: 1\nI0524 23:41:35.945764 172 log.go:172] (0xc000646b00) (0xc0006ac140) Stream removed, broadcasting: 3\nI0524 23:41:35.945782 172 log.go:172] (0xc000646b00) (0xc0000ddf40) Stream removed, broadcasting: 5\n" May 24 23:41:35.950: INFO: stdout: "" May 24 23:41:35.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7067 execpod-affinity85xk5 -- /bin/sh -x -c nc -zv -t -w 2 10.96.89.27 80' May 24 23:41:36.170: INFO: stderr: "I0524 23:41:36.093402 193 log.go:172] (0xc000b61290) (0xc000aec780) Create stream\nI0524 23:41:36.093473 193 log.go:172] (0xc000b61290) (0xc000aec780) Stream added, broadcasting: 1\nI0524 23:41:36.098465 193 log.go:172] (0xc000b61290) Reply frame received for 1\nI0524 23:41:36.098498 193 log.go:172] (0xc000b61290) (0xc00050c320) Create stream\nI0524 23:41:36.098508 193 log.go:172] (0xc000b61290) (0xc00050c320) Stream added, broadcasting: 3\nI0524 23:41:36.099367 193 log.go:172] (0xc000b61290) Reply frame received for 3\nI0524 23:41:36.099413 193 log.go:172] (0xc000b61290) (0xc0004a6e60) Create stream\nI0524 23:41:36.099434 193 log.go:172] (0xc000b61290) 
(0xc0004a6e60) Stream added, broadcasting: 5\nI0524 23:41:36.100193 193 log.go:172] (0xc000b61290) Reply frame received for 5\nI0524 23:41:36.163254 193 log.go:172] (0xc000b61290) Data frame received for 5\nI0524 23:41:36.163284 193 log.go:172] (0xc0004a6e60) (5) Data frame handling\nI0524 23:41:36.163320 193 log.go:172] (0xc0004a6e60) (5) Data frame sent\nI0524 23:41:36.163345 193 log.go:172] (0xc000b61290) Data frame received for 5\nI0524 23:41:36.163366 193 log.go:172] (0xc0004a6e60) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.89.27 80\nConnection to 10.96.89.27 80 port [tcp/http] succeeded!\nI0524 23:41:36.163417 193 log.go:172] (0xc000b61290) Data frame received for 3\nI0524 23:41:36.163452 193 log.go:172] (0xc00050c320) (3) Data frame handling\nI0524 23:41:36.165078 193 log.go:172] (0xc000b61290) Data frame received for 1\nI0524 23:41:36.165293 193 log.go:172] (0xc000aec780) (1) Data frame handling\nI0524 23:41:36.165335 193 log.go:172] (0xc000aec780) (1) Data frame sent\nI0524 23:41:36.165362 193 log.go:172] (0xc000b61290) (0xc000aec780) Stream removed, broadcasting: 1\nI0524 23:41:36.165395 193 log.go:172] (0xc000b61290) Go away received\nI0524 23:41:36.165692 193 log.go:172] (0xc000b61290) (0xc000aec780) Stream removed, broadcasting: 1\nI0524 23:41:36.165709 193 log.go:172] (0xc000b61290) (0xc00050c320) Stream removed, broadcasting: 3\nI0524 23:41:36.165719 193 log.go:172] (0xc000b61290) (0xc0004a6e60) Stream removed, broadcasting: 5\n" May 24 23:41:36.171: INFO: stdout: "" May 24 23:41:36.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7067 execpod-affinity85xk5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31130' May 24 23:41:36.392: INFO: stderr: "I0524 23:41:36.319795 215 log.go:172] (0xc000808a50) (0xc00039cf00) Create stream\nI0524 23:41:36.319872 215 log.go:172] (0xc000808a50) (0xc00039cf00) Stream added, broadcasting: 1\nI0524 23:41:36.323336 215 log.go:172] (0xc000808a50) Reply frame received for 1\nI0524 23:41:36.323395 215 log.go:172] (0xc000808a50) (0xc0001395e0) Create stream\nI0524 23:41:36.323415 215 log.go:172] (0xc000808a50) (0xc0001395e0) Stream added, broadcasting: 3\nI0524 23:41:36.324494 215 log.go:172] (0xc000808a50) Reply frame received for 3\nI0524 23:41:36.324541 215 log.go:172] (0xc000808a50) (0xc000a4e000) Create stream\nI0524 23:41:36.324556 215 log.go:172] (0xc000808a50) (0xc000a4e000) Stream added, broadcasting: 5\nI0524 23:41:36.325873 215 log.go:172] (0xc000808a50) Reply frame received for 5\nI0524 23:41:36.385402 215 log.go:172] (0xc000808a50) Data frame received for 3\nI0524 23:41:36.385477 215 log.go:172] (0xc0001395e0) (3) Data frame handling\nI0524 23:41:36.385515 215 log.go:172] (0xc000808a50) Data frame received for 5\nI0524 23:41:36.385541 215 log.go:172] (0xc000a4e000) (5) Data frame handling\nI0524 23:41:36.385568 215 log.go:172] (0xc000a4e000) (5) Data frame sent\nI0524 23:41:36.385588 215 log.go:172] (0xc000808a50) Data frame received for 5\nI0524 23:41:36.385609 215 log.go:172] (0xc000a4e000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31130\nConnection to 172.17.0.13 31130 port [tcp/31130] succeeded!\nI0524 23:41:36.387070 215 log.go:172] (0xc000808a50) Data frame received for 1\nI0524 23:41:36.387091 215 log.go:172] (0xc00039cf00) (1) Data frame handling\nI0524 23:41:36.387103 215 log.go:172] (0xc00039cf00) (1) Data frame sent\nI0524 23:41:36.387116 215 log.go:172] (0xc000808a50) (0xc00039cf00) Stream removed, broadcasting: 1\nI0524 
23:41:36.387273 215 log.go:172] (0xc000808a50) Go away received\nI0524 23:41:36.387460 215 log.go:172] (0xc000808a50) (0xc00039cf00) Stream removed, broadcasting: 1\nI0524 23:41:36.387484 215 log.go:172] (0xc000808a50) (0xc0001395e0) Stream removed, broadcasting: 3\nI0524 23:41:36.387497 215 log.go:172] (0xc000808a50) (0xc000a4e000) Stream removed, broadcasting: 5\n" May 24 23:41:36.392: INFO: stdout: "" May 24 23:41:36.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7067 execpod-affinity85xk5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31130' May 24 23:41:36.630: INFO: stderr: "I0524 23:41:36.542275 238 log.go:172] (0xc0005f8fd0) (0xc0009d2780) Create stream\nI0524 23:41:36.542365 238 log.go:172] (0xc0005f8fd0) (0xc0009d2780) Stream added, broadcasting: 1\nI0524 23:41:36.546913 238 log.go:172] (0xc0005f8fd0) Reply frame received for 1\nI0524 23:41:36.546960 238 log.go:172] (0xc0005f8fd0) (0xc000520960) Create stream\nI0524 23:41:36.546971 238 log.go:172] (0xc0005f8fd0) (0xc000520960) Stream added, broadcasting: 3\nI0524 23:41:36.547777 238 log.go:172] (0xc0005f8fd0) Reply frame received for 3\nI0524 23:41:36.547824 238 log.go:172] (0xc0005f8fd0) (0xc000520be0) Create stream\nI0524 23:41:36.547847 238 log.go:172] (0xc0005f8fd0) (0xc000520be0) Stream added, broadcasting: 5\nI0524 23:41:36.549363 238 log.go:172] (0xc0005f8fd0) Reply frame received for 5\nI0524 23:41:36.621085 238 log.go:172] (0xc0005f8fd0) Data frame received for 5\nI0524 23:41:36.621272 238 log.go:172] (0xc000520be0) (5) Data frame handling\nI0524 23:41:36.621300 238 log.go:172] (0xc000520be0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31130\nI0524 23:41:36.621597 238 log.go:172] (0xc0005f8fd0) Data frame received for 5\nI0524 23:41:36.621618 238 log.go:172] (0xc000520be0) (5) Data frame handling\nI0524 23:41:36.621636 238 log.go:172] (0xc000520be0) (5) Data frame sent\nConnection to 172.17.0.12 31130 port [tcp/31130] succeeded!\nI0524 23:41:36.621894 238 log.go:172] (0xc0005f8fd0) Data frame received for 3\nI0524 23:41:36.621921 238 log.go:172] (0xc000520960) (3) Data frame handling\nI0524 23:41:36.622061 238 log.go:172] (0xc0005f8fd0) Data frame received for 5\nI0524 23:41:36.622081 238 log.go:172] (0xc000520be0) (5) Data frame handling\nI0524 23:41:36.623446 238 log.go:172] (0xc0005f8fd0) Data frame received for 1\nI0524 23:41:36.623458 238 log.go:172] (0xc0009d2780) (1) Data frame handling\nI0524 23:41:36.623473 238 log.go:172] (0xc0009d2780) (1) Data frame sent\nI0524 23:41:36.623485 238 log.go:172] (0xc0005f8fd0) (0xc0009d2780) Stream removed, broadcasting: 1\nI0524 23:41:36.623499 238 log.go:172] (0xc0005f8fd0) Go away received\nI0524 23:41:36.623922 238 log.go:172] (0xc0005f8fd0) (0xc0009d2780) Stream removed, broadcasting: 1\nI0524 23:41:36.623949 238 log.go:172] (0xc0005f8fd0) (0xc000520960) Stream removed, broadcasting: 3\nI0524 23:41:36.623964 238 log.go:172] (0xc0005f8fd0) (0xc000520be0) Stream removed, broadcasting: 5\n" May 24 23:41:36.630: INFO: stdout: "" May 24 23:41:36.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7067 execpod-affinity85xk5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31130/ ; done' May 24 23:41:36.968: INFO: stderr: "I0524 23:41:36.760352 259 log.go:172] (0xc0009533f0) (0xc000aae6e0) Create stream\nI0524 23:41:36.760438 259 log.go:172] (0xc0009533f0) 
(0xc000aae6e0) Stream added, broadcasting: 1\nI0524 23:41:36.765688 259 log.go:172] (0xc0009533f0) Reply frame received for 1\nI0524 23:41:36.765734 259 log.go:172] (0xc0009533f0) (0xc00061edc0) Create stream\nI0524 23:41:36.765742 259 log.go:172] (0xc0009533f0) (0xc00061edc0) Stream added, broadcasting: 3\nI0524 23:41:36.766643 259 log.go:172] (0xc0009533f0) Reply frame received for 3\nI0524 23:41:36.766678 259 log.go:172] (0xc0009533f0) (0xc0003641e0) Create stream\nI0524 23:41:36.766688 259 log.go:172] (0xc0009533f0) (0xc0003641e0) Stream added, broadcasting: 5\nI0524 23:41:36.767540 259 log.go:172] (0xc0009533f0) Reply frame received for 5\nI0524 23:41:36.816844 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.816899 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.816925 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.816975 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.816998 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.817016 259 log.go:172] (0xc0003641e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.876607 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.876646 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.876662 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.876826 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.876856 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.876884 259 log.go:172] (0xc0003641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.877350 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.877387 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.877410 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.884026 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.884040 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.884048 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.885097 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.885329 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.885354 259 log.go:172] (0xc0003641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.885381 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.885400 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.885429 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.891212 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.891237 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.891252 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.891916 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.891960 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.891988 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.892032 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.892075 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.892131 259 log.go:172] (0xc0003641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.897565 259 log.go:172] (0xc0009533f0) Data frame 
received for 3\nI0524 23:41:36.897584 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.897594 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.898021 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.898033 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.898042 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.898073 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.898104 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.898142 259 log.go:172] (0xc0003641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.903345 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.903381 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.903422 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.903574 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.903594 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.903602 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.903618 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.903628 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.903636 259 log.go:172] (0xc0003641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.907316 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.907338 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.907351 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.907719 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.907763 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.907780 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.907795 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.907803 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.907813 259 log.go:172] (0xc0003641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.912113 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.912147 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.912178 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.912520 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.912562 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.912577 259 log.go:172] (0xc0003641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.912597 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.912610 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.912626 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.916714 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.916738 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.916759 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.917644 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.917664 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.917680 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.917700 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 
23:41:36.917709 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.917718 259 log.go:172] (0xc0003641e0) (5) Data frame sent\nI0524 23:41:36.917727 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.917736 259 log.go:172] (0xc0003641e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.917769 259 log.go:172] (0xc0003641e0) (5) Data frame sent\nI0524 23:41:36.922225 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.922269 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.922305 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.923041 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.923108 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.923129 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.923147 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.923158 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.923168 259 log.go:172] (0xc0003641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.930750 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.930771 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.930792 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.931208 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.931225 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.931245 259 log.go:172] (0xc0003641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.931328 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.931344 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.931360 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.935015 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.935037 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.935060 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.935605 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.935639 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.935658 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.935676 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.935687 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.935696 259 log.go:172] (0xc0003641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.940246 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.940262 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.940274 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.940663 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.940704 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.940719 259 log.go:172] (0xc0003641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.940742 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.940760 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.940786 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.944251 259 log.go:172] (0xc0009533f0) 
Data frame received for 3\nI0524 23:41:36.944279 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.944301 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.944863 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.944886 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.944893 259 log.go:172] (0xc0003641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.944904 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.944910 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.944920 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.950510 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.950534 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.950573 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.951721 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.951739 259 log.go:172] (0xc0003641e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.951757 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.951777 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.951792 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.951827 259 log.go:172] (0xc0003641e0) (5) Data frame sent\nI0524 23:41:36.955998 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.956050 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.956073 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.956419 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.956451 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.956470 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.956485 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.956495 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.956504 259 log.go:172] (0xc0003641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31130/\nI0524 23:41:36.960451 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.960471 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.960485 259 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0524 23:41:36.960981 259 log.go:172] (0xc0009533f0) Data frame received for 3\nI0524 23:41:36.960998 259 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0524 23:41:36.961060 259 log.go:172] (0xc0009533f0) Data frame received for 5\nI0524 23:41:36.961085 259 log.go:172] (0xc0003641e0) (5) Data frame handling\nI0524 23:41:36.963059 259 log.go:172] (0xc0009533f0) Data frame received for 1\nI0524 23:41:36.963081 259 log.go:172] (0xc000aae6e0) (1) Data frame handling\nI0524 23:41:36.963094 259 log.go:172] (0xc000aae6e0) (1) Data frame sent\nI0524 23:41:36.963111 259 log.go:172] (0xc0009533f0) (0xc000aae6e0) Stream removed, broadcasting: 1\nI0524 23:41:36.963146 259 log.go:172] (0xc0009533f0) Go away received\nI0524 23:41:36.963456 259 log.go:172] (0xc0009533f0) (0xc000aae6e0) Stream removed, broadcasting: 1\nI0524 23:41:36.963479 259 log.go:172] (0xc0009533f0) (0xc00061edc0) Stream removed, broadcasting: 3\nI0524 23:41:36.963490 259 log.go:172] (0xc0009533f0) (0xc0003641e0) Stream removed, broadcasting: 5\n" May 24 23:41:36.969: INFO: stdout: 
"\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw\naffinity-nodeport-r92dw" May 24 23:41:36.969: INFO: Received response from host: May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Received response from host: affinity-nodeport-r92dw May 24 23:41:36.969: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-7067, will wait for the garbage collector to delete the pods May 24 23:41:37.108: INFO: Deleting ReplicationController affinity-nodeport took: 5.932285ms May 24 23:41:37.608: INFO: Terminating ReplicationController affinity-nodeport pods took: 500.237926ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:41:44.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7067" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.919 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":12,"skipped":111,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:41:44.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 24 23:41:45.054: INFO: Waiting up to 5m0s for pod "pod-eb0fc898-b9f5-4591-9e7c-7fcf18397203" in namespace "emptydir-45" to be "Succeeded or Failed" May 24 23:41:45.059: INFO: Pod "pod-eb0fc898-b9f5-4591-9e7c-7fcf18397203": Phase="Pending", Reason="", readiness=false. Elapsed: 4.8121ms May 24 23:41:47.063: INFO: Pod "pod-eb0fc898-b9f5-4591-9e7c-7fcf18397203": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008926399s May 24 23:41:49.079: INFO: Pod "pod-eb0fc898-b9f5-4591-9e7c-7fcf18397203": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024741741s STEP: Saw pod success May 24 23:41:49.079: INFO: Pod "pod-eb0fc898-b9f5-4591-9e7c-7fcf18397203" satisfied condition "Succeeded or Failed" May 24 23:41:49.082: INFO: Trying to get logs from node latest-worker pod pod-eb0fc898-b9f5-4591-9e7c-7fcf18397203 container test-container: STEP: delete the pod May 24 23:41:49.114: INFO: Waiting for pod pod-eb0fc898-b9f5-4591-9e7c-7fcf18397203 to disappear May 24 23:41:49.124: INFO: Pod pod-eb0fc898-b9f5-4591-9e7c-7fcf18397203 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:41:49.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-45" for this suite. 
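The EmptyDir spec above launches a pod whose container inspects its mount and exits; "Saw pod success" means the volume was tmpfs with the expected mode. A hand-run equivalent could look like this (a sketch; the pod name, busybox image and mount path are illustrative):

# Sketch: verify that a Memory-backed emptyDir is mounted as tmpfs.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-check
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test-volume; stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # back the volume with tmpfs instead of node disk
EOF
kubectl logs emptydir-tmpfs-check   # expect a tmpfs mount line and mode 777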
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":13,"skipped":123,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:41:49.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-46dc660d-6419-48bd-b9fc-89df751709c1 STEP: Creating configMap with name cm-test-opt-upd-1d9aa247-bc5e-4849-9ace-c31f0977a3f9 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-46dc660d-6419-48bd-b9fc-89df751709c1 STEP: Updating configmap cm-test-opt-upd-1d9aa247-bc5e-4849-9ace-c31f0977a3f9 STEP: Creating configMap with name cm-test-opt-create-b0178685-20a4-4b31-84fc-b3f26baafc58 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:41:59.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4434" for this suite. • [SLOW TEST:10.308 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":14,"skipped":131,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:41:59.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 24 23:41:59.597: INFO: Waiting up to 5m0s for pod "downwardapi-volume-deb95537-1c5a-4c1e-8622-bfef2f0f7b96" in namespace "downward-api-3059" to be "Succeeded or Failed" May 24 23:41:59.647: INFO: Pod "downwardapi-volume-deb95537-1c5a-4c1e-8622-bfef2f0f7b96": Phase="Pending", 
Reason="", readiness=false. Elapsed: 49.525893ms May 24 23:42:01.653: INFO: Pod "downwardapi-volume-deb95537-1c5a-4c1e-8622-bfef2f0f7b96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055993553s May 24 23:42:03.658: INFO: Pod "downwardapi-volume-deb95537-1c5a-4c1e-8622-bfef2f0f7b96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060413684s STEP: Saw pod success May 24 23:42:03.658: INFO: Pod "downwardapi-volume-deb95537-1c5a-4c1e-8622-bfef2f0f7b96" satisfied condition "Succeeded or Failed" May 24 23:42:03.661: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-deb95537-1c5a-4c1e-8622-bfef2f0f7b96 container client-container: STEP: delete the pod May 24 23:42:03.694: INFO: Waiting for pod downwardapi-volume-deb95537-1c5a-4c1e-8622-bfef2f0f7b96 to disappear May 24 23:42:03.712: INFO: Pod downwardapi-volume-deb95537-1c5a-4c1e-8622-bfef2f0f7b96 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:42:03.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3059" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":15,"skipped":133,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:42:03.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:42:12.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3702" for this suite. 
• [SLOW TEST:8.384 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":16,"skipped":134,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:42:12.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-8a378805-5df7-4bb0-b1ae-b9ae74ee23d3 STEP: Creating a pod to test consume secrets May 24 23:42:12.306: INFO: Waiting up to 5m0s for pod "pod-secrets-466f4973-2e81-43ed-ba48-da0351eb4a90" in namespace "secrets-254" to be "Succeeded or Failed" May 24 23:42:12.366: INFO: Pod "pod-secrets-466f4973-2e81-43ed-ba48-da0351eb4a90": Phase="Pending", Reason="", readiness=false. Elapsed: 59.608162ms May 24 23:42:14.464: INFO: Pod "pod-secrets-466f4973-2e81-43ed-ba48-da0351eb4a90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157388192s May 24 23:42:16.469: INFO: Pod "pod-secrets-466f4973-2e81-43ed-ba48-da0351eb4a90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.162279878s STEP: Saw pod success May 24 23:42:16.469: INFO: Pod "pod-secrets-466f4973-2e81-43ed-ba48-da0351eb4a90" satisfied condition "Succeeded or Failed" May 24 23:42:16.472: INFO: Trying to get logs from node latest-worker pod pod-secrets-466f4973-2e81-43ed-ba48-da0351eb4a90 container secret-volume-test: STEP: delete the pod May 24 23:42:16.527: INFO: Waiting for pod pod-secrets-466f4973-2e81-43ed-ba48-da0351eb4a90 to disappear May 24 23:42:16.563: INFO: Pod pod-secrets-466f4973-2e81-43ed-ba48-da0351eb4a90 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:42:16.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-254" for this suite. STEP: Destroying namespace "secret-namespace-4596" for this suite. 
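The Secrets spec above (note the two namespaces destroyed at the end) verifies that a pod mounts the secret from its own namespace even when another namespace holds a secret with the same name. The isolation can be demonstrated by hand like this (a sketch; all names and literal values are illustrative):

# Sketch: same-named secrets in different namespaces stay isolated.
kubectl create namespace ns-a
kubectl create namespace ns-b
kubectl -n ns-a create secret generic same-name --from-literal=data-1=value-a
kubectl -n ns-b create secret generic same-name --from-literal=data-1=value-b
kubectl -n ns-a apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mount-check
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: same-name
EOF
kubectl -n ns-a logs secret-mount-check   # expect value-a, never value-b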
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":17,"skipped":146,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:42:16.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5942 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-5942 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5942 May 24 23:42:16.738: INFO: Found 0 stateful pods, waiting for 1 May 24 23:42:26.741: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 24 23:42:26.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 23:42:27.044: INFO: stderr: "I0524 23:42:26.887345 279 log.go:172] (0xc000989550) (0xc000c16460) Create stream\nI0524 23:42:26.887404 279 log.go:172] (0xc000989550) (0xc000c16460) Stream added, broadcasting: 1\nI0524 23:42:26.892250 279 log.go:172] (0xc000989550) Reply frame received for 1\nI0524 23:42:26.892279 279 log.go:172] (0xc000989550) (0xc00072a6e0) Create stream\nI0524 23:42:26.892289 279 log.go:172] (0xc000989550) (0xc00072a6e0) Stream added, broadcasting: 3\nI0524 23:42:26.893394 279 log.go:172] (0xc000989550) Reply frame received for 3\nI0524 23:42:26.893432 279 log.go:172] (0xc000989550) (0xc0007166e0) Create stream\nI0524 23:42:26.893458 279 log.go:172] (0xc000989550) (0xc0007166e0) Stream added, broadcasting: 5\nI0524 23:42:26.894560 279 log.go:172] (0xc000989550) Reply frame received for 5\nI0524 23:42:26.981634 279 log.go:172] (0xc000989550) Data frame received for 5\nI0524 23:42:26.981661 279 log.go:172] (0xc0007166e0) (5) Data frame handling\nI0524 23:42:26.981674 279 log.go:172] (0xc0007166e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 23:42:27.036874 279 log.go:172] (0xc000989550) Data frame received for 5\nI0524 23:42:27.036908 279 log.go:172] (0xc0007166e0) (5) Data frame handling\nI0524 23:42:27.036925 279 log.go:172] (0xc000989550) Data frame received for 3\nI0524 23:42:27.036930 279 log.go:172] (0xc00072a6e0) (3) Data frame handling\nI0524 
23:42:27.036935 279 log.go:172] (0xc00072a6e0) (3) Data frame sent\nI0524 23:42:27.036941 279 log.go:172] (0xc000989550) Data frame received for 3\nI0524 23:42:27.036944 279 log.go:172] (0xc00072a6e0) (3) Data frame handling\nI0524 23:42:27.039063 279 log.go:172] (0xc000989550) Data frame received for 1\nI0524 23:42:27.039088 279 log.go:172] (0xc000c16460) (1) Data frame handling\nI0524 23:42:27.039109 279 log.go:172] (0xc000c16460) (1) Data frame sent\nI0524 23:42:27.039126 279 log.go:172] (0xc000989550) (0xc000c16460) Stream removed, broadcasting: 1\nI0524 23:42:27.039142 279 log.go:172] (0xc000989550) Go away received\nI0524 23:42:27.039407 279 log.go:172] (0xc000989550) (0xc000c16460) Stream removed, broadcasting: 1\nI0524 23:42:27.039419 279 log.go:172] (0xc000989550) (0xc00072a6e0) Stream removed, broadcasting: 3\nI0524 23:42:27.039425 279 log.go:172] (0xc000989550) (0xc0007166e0) Stream removed, broadcasting: 5\n" May 24 23:42:27.044: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 23:42:27.044: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 23:42:27.048: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 24 23:42:37.053: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 23:42:37.053: INFO: Waiting for statefulset status.replicas updated to 0 May 24 23:42:37.084: INFO: POD NODE PHASE GRACE CONDITIONS May 24 23:42:37.084: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:16 +0000 UTC }] May 24 23:42:37.084: INFO: May 24 23:42:37.084: INFO: StatefulSet ss has not reached scale 3, at 1 May 24 23:42:38.088: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.979422158s May 24 23:42:39.192: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.97509555s May 24 23:42:40.206: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.871519986s May 24 23:42:41.265: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.857162063s May 24 23:42:42.271: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.798329097s May 24 23:42:43.276: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.79227599s May 24 23:42:44.280: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.787631196s May 24 23:42:45.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.783812475s May 24 23:42:46.293: INFO: Verifying statefulset ss doesn't scale past 3 for another 779.199715ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5942 May 24 23:42:47.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:42:47.596: INFO: stderr: "I0524 23:42:47.524771 302 log.go:172] (0xc0009eae70) (0xc00068e500) Create stream\nI0524 23:42:47.524816 302 
log.go:172] (0xc0009eae70) (0xc00068e500) Stream added, broadcasting: 1\nI0524 23:42:47.528609 302 log.go:172] (0xc0009eae70) Reply frame received for 1\nI0524 23:42:47.528647 302 log.go:172] (0xc0009eae70) (0xc0006acdc0) Create stream\nI0524 23:42:47.528675 302 log.go:172] (0xc0009eae70) (0xc0006acdc0) Stream added, broadcasting: 3\nI0524 23:42:47.529583 302 log.go:172] (0xc0009eae70) Reply frame received for 3\nI0524 23:42:47.529608 302 log.go:172] (0xc0009eae70) (0xc000584460) Create stream\nI0524 23:42:47.529619 302 log.go:172] (0xc0009eae70) (0xc000584460) Stream added, broadcasting: 5\nI0524 23:42:47.530363 302 log.go:172] (0xc0009eae70) Reply frame received for 5\nI0524 23:42:47.590535 302 log.go:172] (0xc0009eae70) Data frame received for 5\nI0524 23:42:47.590569 302 log.go:172] (0xc000584460) (5) Data frame handling\nI0524 23:42:47.590587 302 log.go:172] (0xc000584460) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 23:42:47.590827 302 log.go:172] (0xc0009eae70) Data frame received for 3\nI0524 23:42:47.590844 302 log.go:172] (0xc0006acdc0) (3) Data frame handling\nI0524 23:42:47.590859 302 log.go:172] (0xc0006acdc0) (3) Data frame sent\nI0524 23:42:47.591041 302 log.go:172] (0xc0009eae70) Data frame received for 5\nI0524 23:42:47.591056 302 log.go:172] (0xc000584460) (5) Data frame handling\nI0524 23:42:47.591067 302 log.go:172] (0xc0009eae70) Data frame received for 3\nI0524 23:42:47.591081 302 log.go:172] (0xc0006acdc0) (3) Data frame handling\nI0524 23:42:47.592513 302 log.go:172] (0xc0009eae70) Data frame received for 1\nI0524 23:42:47.592531 302 log.go:172] (0xc00068e500) (1) Data frame handling\nI0524 23:42:47.592539 302 log.go:172] (0xc00068e500) (1) Data frame sent\nI0524 23:42:47.592550 302 log.go:172] (0xc0009eae70) (0xc00068e500) Stream removed, broadcasting: 1\nI0524 23:42:47.592562 302 log.go:172] (0xc0009eae70) Go away received\nI0524 23:42:47.592874 302 log.go:172] (0xc0009eae70) (0xc00068e500) Stream removed, broadcasting: 1\nI0524 23:42:47.592887 302 log.go:172] (0xc0009eae70) (0xc0006acdc0) Stream removed, broadcasting: 3\nI0524 23:42:47.592895 302 log.go:172] (0xc0009eae70) (0xc000584460) Stream removed, broadcasting: 5\n" May 24 23:42:47.596: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 23:42:47.596: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 23:42:47.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:42:47.797: INFO: stderr: "I0524 23:42:47.716317 322 log.go:172] (0xc00060b080) (0xc000a768c0) Create stream\nI0524 23:42:47.716388 322 log.go:172] (0xc00060b080) (0xc000a768c0) Stream added, broadcasting: 1\nI0524 23:42:47.720714 322 log.go:172] (0xc00060b080) Reply frame received for 1\nI0524 23:42:47.720762 322 log.go:172] (0xc00060b080) (0xc000840f00) Create stream\nI0524 23:42:47.720779 322 log.go:172] (0xc00060b080) (0xc000840f00) Stream added, broadcasting: 3\nI0524 23:42:47.721886 322 log.go:172] (0xc00060b080) Reply frame received for 3\nI0524 23:42:47.721927 322 log.go:172] (0xc00060b080) (0xc0005665a0) Create stream\nI0524 23:42:47.721947 322 log.go:172] (0xc00060b080) (0xc0005665a0) Stream added, broadcasting: 5\nI0524 23:42:47.722838 322 log.go:172] (0xc00060b080) Reply frame received for 5\nI0524 
23:42:47.790052 322 log.go:172] (0xc00060b080) Data frame received for 5\nI0524 23:42:47.790105 322 log.go:172] (0xc0005665a0) (5) Data frame handling\nI0524 23:42:47.790142 322 log.go:172] (0xc0005665a0) (5) Data frame sent\nI0524 23:42:47.790161 322 log.go:172] (0xc00060b080) Data frame received for 5\nI0524 23:42:47.790178 322 log.go:172] (0xc0005665a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0524 23:42:47.790321 322 log.go:172] (0xc00060b080) Data frame received for 3\nI0524 23:42:47.790333 322 log.go:172] (0xc000840f00) (3) Data frame handling\nI0524 23:42:47.790346 322 log.go:172] (0xc000840f00) (3) Data frame sent\nI0524 23:42:47.790360 322 log.go:172] (0xc00060b080) Data frame received for 3\nI0524 23:42:47.790366 322 log.go:172] (0xc000840f00) (3) Data frame handling\nI0524 23:42:47.791833 322 log.go:172] (0xc00060b080) Data frame received for 1\nI0524 23:42:47.791858 322 log.go:172] (0xc000a768c0) (1) Data frame handling\nI0524 23:42:47.791878 322 log.go:172] (0xc000a768c0) (1) Data frame sent\nI0524 23:42:47.791888 322 log.go:172] (0xc00060b080) (0xc000a768c0) Stream removed, broadcasting: 1\nI0524 23:42:47.791908 322 log.go:172] (0xc00060b080) Go away received\nI0524 23:42:47.792265 322 log.go:172] (0xc00060b080) (0xc000a768c0) Stream removed, broadcasting: 1\nI0524 23:42:47.792286 322 log.go:172] (0xc00060b080) (0xc000840f00) Stream removed, broadcasting: 3\nI0524 23:42:47.792297 322 log.go:172] (0xc00060b080) (0xc0005665a0) Stream removed, broadcasting: 5\n" May 24 23:42:47.797: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 23:42:47.797: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 23:42:47.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:42:48.024: INFO: stderr: "I0524 23:42:47.942045 343 log.go:172] (0xc000c813f0) (0xc000c681e0) Create stream\nI0524 23:42:47.942110 343 log.go:172] (0xc000c813f0) (0xc000c681e0) Stream added, broadcasting: 1\nI0524 23:42:47.944508 343 log.go:172] (0xc000c813f0) Reply frame received for 1\nI0524 23:42:47.944553 343 log.go:172] (0xc000c813f0) (0xc000b500a0) Create stream\nI0524 23:42:47.944567 343 log.go:172] (0xc000c813f0) (0xc000b500a0) Stream added, broadcasting: 3\nI0524 23:42:47.945988 343 log.go:172] (0xc000c813f0) Reply frame received for 3\nI0524 23:42:47.946030 343 log.go:172] (0xc000c813f0) (0xc000c68280) Create stream\nI0524 23:42:47.946045 343 log.go:172] (0xc000c813f0) (0xc000c68280) Stream added, broadcasting: 5\nI0524 23:42:47.947085 343 log.go:172] (0xc000c813f0) Reply frame received for 5\nI0524 23:42:48.016579 343 log.go:172] (0xc000c813f0) Data frame received for 3\nI0524 23:42:48.016611 343 log.go:172] (0xc000b500a0) (3) Data frame handling\nI0524 23:42:48.016628 343 log.go:172] (0xc000b500a0) (3) Data frame sent\nI0524 23:42:48.016639 343 log.go:172] (0xc000c813f0) Data frame received for 3\nI0524 23:42:48.016644 343 log.go:172] (0xc000b500a0) (3) Data frame handling\nI0524 23:42:48.016670 343 log.go:172] (0xc000c813f0) Data frame received for 5\nI0524 23:42:48.016678 343 log.go:172] (0xc000c68280) (5) Data frame handling\nI0524 23:42:48.016689 343 log.go:172] (0xc000c68280) (5) Data frame 
sent\nI0524 23:42:48.016698 343 log.go:172] (0xc000c813f0) Data frame received for 5\nI0524 23:42:48.016705 343 log.go:172] (0xc000c68280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0524 23:42:48.018314 343 log.go:172] (0xc000c813f0) Data frame received for 1\nI0524 23:42:48.018335 343 log.go:172] (0xc000c681e0) (1) Data frame handling\nI0524 23:42:48.018350 343 log.go:172] (0xc000c681e0) (1) Data frame sent\nI0524 23:42:48.018362 343 log.go:172] (0xc000c813f0) (0xc000c681e0) Stream removed, broadcasting: 1\nI0524 23:42:48.018371 343 log.go:172] (0xc000c813f0) Go away received\nI0524 23:42:48.018699 343 log.go:172] (0xc000c813f0) (0xc000c681e0) Stream removed, broadcasting: 1\nI0524 23:42:48.018717 343 log.go:172] (0xc000c813f0) (0xc000b500a0) Stream removed, broadcasting: 3\nI0524 23:42:48.018727 343 log.go:172] (0xc000c813f0) (0xc000c68280) Stream removed, broadcasting: 5\n" May 24 23:42:48.024: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 23:42:48.024: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 23:42:48.042: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 24 23:42:58.048: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 24 23:42:58.048: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 24 23:42:58.048: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 24 23:42:58.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 23:42:58.274: INFO: stderr: "I0524 23:42:58.189297 363 log.go:172] (0xc0009f6000) (0xc000139cc0) Create stream\nI0524 23:42:58.189348 363 log.go:172] (0xc0009f6000) (0xc000139cc0) Stream added, broadcasting: 1\nI0524 23:42:58.192497 363 log.go:172] (0xc0009f6000) Reply frame received for 1\nI0524 23:42:58.192529 363 log.go:172] (0xc0009f6000) (0xc000634d20) Create stream\nI0524 23:42:58.192537 363 log.go:172] (0xc0009f6000) (0xc000634d20) Stream added, broadcasting: 3\nI0524 23:42:58.193595 363 log.go:172] (0xc0009f6000) Reply frame received for 3\nI0524 23:42:58.193637 363 log.go:172] (0xc0009f6000) (0xc000635c20) Create stream\nI0524 23:42:58.193649 363 log.go:172] (0xc0009f6000) (0xc000635c20) Stream added, broadcasting: 5\nI0524 23:42:58.194777 363 log.go:172] (0xc0009f6000) Reply frame received for 5\nI0524 23:42:58.267235 363 log.go:172] (0xc0009f6000) Data frame received for 5\nI0524 23:42:58.267300 363 log.go:172] (0xc000635c20) (5) Data frame handling\nI0524 23:42:58.267313 363 log.go:172] (0xc000635c20) (5) Data frame sent\nI0524 23:42:58.267323 363 log.go:172] (0xc0009f6000) Data frame received for 5\nI0524 23:42:58.267330 363 log.go:172] (0xc000635c20) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 23:42:58.267357 363 log.go:172] (0xc0009f6000) Data frame received for 3\nI0524 23:42:58.267365 363 log.go:172] (0xc000634d20) (3) Data frame handling\nI0524 23:42:58.267381 363 log.go:172] (0xc000634d20) (3) Data frame sent\nI0524 23:42:58.267390 363 log.go:172] (0xc0009f6000) 
Data frame received for 3\nI0524 23:42:58.267397 363 log.go:172] (0xc000634d20) (3) Data frame handling\nI0524 23:42:58.268903 363 log.go:172] (0xc0009f6000) Data frame received for 1\nI0524 23:42:58.268922 363 log.go:172] (0xc000139cc0) (1) Data frame handling\nI0524 23:42:58.268933 363 log.go:172] (0xc000139cc0) (1) Data frame sent\nI0524 23:42:58.268949 363 log.go:172] (0xc0009f6000) (0xc000139cc0) Stream removed, broadcasting: 1\nI0524 23:42:58.269087 363 log.go:172] (0xc0009f6000) Go away received\nI0524 23:42:58.269377 363 log.go:172] (0xc0009f6000) (0xc000139cc0) Stream removed, broadcasting: 1\nI0524 23:42:58.269401 363 log.go:172] (0xc0009f6000) (0xc000634d20) Stream removed, broadcasting: 3\nI0524 23:42:58.269414 363 log.go:172] (0xc0009f6000) (0xc000635c20) Stream removed, broadcasting: 5\n" May 24 23:42:58.274: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 23:42:58.274: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 23:42:58.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 23:42:58.523: INFO: stderr: "I0524 23:42:58.407464 382 log.go:172] (0xc000986210) (0xc0003686e0) Create stream\nI0524 23:42:58.407530 382 log.go:172] (0xc000986210) (0xc0003686e0) Stream added, broadcasting: 1\nI0524 23:42:58.410383 382 log.go:172] (0xc000986210) Reply frame received for 1\nI0524 23:42:58.410421 382 log.go:172] (0xc000986210) (0xc000ac6000) Create stream\nI0524 23:42:58.410433 382 log.go:172] (0xc000986210) (0xc000ac6000) Stream added, broadcasting: 3\nI0524 23:42:58.411437 382 log.go:172] (0xc000986210) Reply frame received for 3\nI0524 23:42:58.411472 382 log.go:172] (0xc000986210) (0xc000369680) Create stream\nI0524 23:42:58.411484 382 log.go:172] (0xc000986210) (0xc000369680) Stream added, broadcasting: 5\nI0524 23:42:58.412420 382 log.go:172] (0xc000986210) Reply frame received for 5\nI0524 23:42:58.473320 382 log.go:172] (0xc000986210) Data frame received for 5\nI0524 23:42:58.473364 382 log.go:172] (0xc000369680) (5) Data frame handling\nI0524 23:42:58.473389 382 log.go:172] (0xc000369680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 23:42:58.514866 382 log.go:172] (0xc000986210) Data frame received for 5\nI0524 23:42:58.514916 382 log.go:172] (0xc000369680) (5) Data frame handling\nI0524 23:42:58.514973 382 log.go:172] (0xc000986210) Data frame received for 3\nI0524 23:42:58.515031 382 log.go:172] (0xc000ac6000) (3) Data frame handling\nI0524 23:42:58.515062 382 log.go:172] (0xc000ac6000) (3) Data frame sent\nI0524 23:42:58.515084 382 log.go:172] (0xc000986210) Data frame received for 3\nI0524 23:42:58.515099 382 log.go:172] (0xc000ac6000) (3) Data frame handling\nI0524 23:42:58.517099 382 log.go:172] (0xc000986210) Data frame received for 1\nI0524 23:42:58.517333 382 log.go:172] (0xc0003686e0) (1) Data frame handling\nI0524 23:42:58.517363 382 log.go:172] (0xc0003686e0) (1) Data frame sent\nI0524 23:42:58.517387 382 log.go:172] (0xc000986210) (0xc0003686e0) Stream removed, broadcasting: 1\nI0524 23:42:58.517465 382 log.go:172] (0xc000986210) Go away received\nI0524 23:42:58.517904 382 log.go:172] (0xc000986210) (0xc0003686e0) Stream removed, broadcasting: 1\nI0524 23:42:58.517944 382 log.go:172] (0xc000986210) (0xc000ac6000) Stream 
removed, broadcasting: 3\nI0524 23:42:58.517968 382 log.go:172] (0xc000986210) (0xc000369680) Stream removed, broadcasting: 5\n" May 24 23:42:58.523: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 23:42:58.523: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 23:42:58.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 23:42:58.772: INFO: stderr: "I0524 23:42:58.661553 402 log.go:172] (0xc0009f2f20) (0xc000ae6320) Create stream\nI0524 23:42:58.661601 402 log.go:172] (0xc0009f2f20) (0xc000ae6320) Stream added, broadcasting: 1\nI0524 23:42:58.666460 402 log.go:172] (0xc0009f2f20) Reply frame received for 1\nI0524 23:42:58.666506 402 log.go:172] (0xc0009f2f20) (0xc00073e140) Create stream\nI0524 23:42:58.666522 402 log.go:172] (0xc0009f2f20) (0xc00073e140) Stream added, broadcasting: 3\nI0524 23:42:58.667476 402 log.go:172] (0xc0009f2f20) Reply frame received for 3\nI0524 23:42:58.667522 402 log.go:172] (0xc0009f2f20) (0xc00066a780) Create stream\nI0524 23:42:58.667537 402 log.go:172] (0xc0009f2f20) (0xc00066a780) Stream added, broadcasting: 5\nI0524 23:42:58.668341 402 log.go:172] (0xc0009f2f20) Reply frame received for 5\nI0524 23:42:58.742833 402 log.go:172] (0xc0009f2f20) Data frame received for 5\nI0524 23:42:58.742855 402 log.go:172] (0xc00066a780) (5) Data frame handling\nI0524 23:42:58.742870 402 log.go:172] (0xc00066a780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 23:42:58.764482 402 log.go:172] (0xc0009f2f20) Data frame received for 3\nI0524 23:42:58.764532 402 log.go:172] (0xc00073e140) (3) Data frame handling\nI0524 23:42:58.764566 402 log.go:172] (0xc00073e140) (3) Data frame sent\nI0524 23:42:58.765340 402 log.go:172] (0xc0009f2f20) Data frame received for 5\nI0524 23:42:58.765371 402 log.go:172] (0xc00066a780) (5) Data frame handling\nI0524 23:42:58.765391 402 log.go:172] (0xc0009f2f20) Data frame received for 3\nI0524 23:42:58.765405 402 log.go:172] (0xc00073e140) (3) Data frame handling\nI0524 23:42:58.767111 402 log.go:172] (0xc0009f2f20) Data frame received for 1\nI0524 23:42:58.767133 402 log.go:172] (0xc000ae6320) (1) Data frame handling\nI0524 23:42:58.767145 402 log.go:172] (0xc000ae6320) (1) Data frame sent\nI0524 23:42:58.767158 402 log.go:172] (0xc0009f2f20) (0xc000ae6320) Stream removed, broadcasting: 1\nI0524 23:42:58.767227 402 log.go:172] (0xc0009f2f20) Go away received\nI0524 23:42:58.767465 402 log.go:172] (0xc0009f2f20) (0xc000ae6320) Stream removed, broadcasting: 1\nI0524 23:42:58.767483 402 log.go:172] (0xc0009f2f20) (0xc00073e140) Stream removed, broadcasting: 3\nI0524 23:42:58.767493 402 log.go:172] (0xc0009f2f20) (0xc00066a780) Stream removed, broadcasting: 5\n" May 24 23:42:58.773: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 23:42:58.773: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 23:42:58.773: INFO: Waiting for statefulset status.replicas updated to 0 May 24 23:42:58.776: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 24 23:43:08.801: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 
24 23:43:08.801: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 24 23:43:08.801: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 24 23:43:08.856: INFO: POD NODE PHASE GRACE CONDITIONS May 24 23:43:08.856: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:16 +0000 UTC }] May 24 23:43:08.856: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:08.856: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:08.856: INFO: May 24 23:43:08.856: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 23:43:10.068: INFO: POD NODE PHASE GRACE CONDITIONS May 24 23:43:10.068: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:16 +0000 UTC }] May 24 23:43:10.068: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:10.068: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:10.068: INFO: May 24 23:43:10.068: INFO: StatefulSet ss has not reached scale 0, 
at 3 May 24 23:43:11.091: INFO: POD NODE PHASE GRACE CONDITIONS May 24 23:43:11.091: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:16 +0000 UTC }] May 24 23:43:11.091: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:11.091: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:11.091: INFO: May 24 23:43:11.091: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 23:43:12.097: INFO: POD NODE PHASE GRACE CONDITIONS May 24 23:43:12.097: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:16 +0000 UTC }] May 24 23:43:12.097: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:12.097: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:12.097: INFO: May 24 23:43:12.097: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 23:43:13.103: INFO: POD NODE PHASE GRACE CONDITIONS May 24 23:43:13.103: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:16 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:16 +0000 UTC }] May 24 23:43:13.103: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:13.103: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:13.103: INFO: May 24 23:43:13.103: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 23:43:14.108: INFO: POD NODE PHASE GRACE CONDITIONS May 24 23:43:14.109: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:16 +0000 UTC }] May 24 23:43:14.109: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:14.109: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:14.109: INFO: May 24 23:43:14.109: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 23:43:15.115: INFO: POD NODE PHASE GRACE CONDITIONS May 24 23:43:15.115: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 
23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:15.115: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:15.115: INFO: May 24 23:43:15.115: INFO: StatefulSet ss has not reached scale 0, at 2 May 24 23:43:16.120: INFO: POD NODE PHASE GRACE CONDITIONS May 24 23:43:16.120: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:16.120: INFO: May 24 23:43:16.120: INFO: StatefulSet ss has not reached scale 0, at 1 May 24 23:43:17.125: INFO: POD NODE PHASE GRACE CONDITIONS May 24 23:43:17.125: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:17.125: INFO: May 24 23:43:17.125: INFO: StatefulSet ss has not reached scale 0, at 1 May 24 23:43:18.130: INFO: POD NODE PHASE GRACE CONDITIONS May 24 23:43:18.130: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 23:42:37 +0000 UTC }] May 24 23:43:18.130: INFO: May 24 23:43:18.130: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5942 May 24 23:43:19.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:43:19.279: INFO: rc: 1 May 24 23:43:19.279: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status
1 May 24 23:43:29.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:43:29.385: INFO: rc: 1 May 24 23:43:29.385: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:43:39.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:43:39.499: INFO: rc: 1 May 24 23:43:39.499: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:43:49.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:43:49.620: INFO: rc: 1 May 24 23:43:49.621: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:43:59.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:43:59.714: INFO: rc: 1 May 24 23:43:59.714: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:44:09.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:44:09.821: INFO: rc: 1 May 24 23:44:09.821: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:44:19.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:44:19.937: INFO: 
rc: 1 May 24 23:44:19.937: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:44:29.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:44:30.071: INFO: rc: 1 May 24 23:44:30.071: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:44:40.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:44:40.175: INFO: rc: 1 May 24 23:44:40.175: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:44:50.176: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:44:50.275: INFO: rc: 1 May 24 23:44:50.275: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:45:00.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:45:00.376: INFO: rc: 1 May 24 23:45:00.376: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:45:10.376: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:45:10.467: INFO: rc: 1 May 24 23:45:10.467: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:45:20.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:45:20.574: INFO: rc: 1 May 24 23:45:20.574: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:45:30.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:45:30.673: INFO: rc: 1 May 24 23:45:30.673: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:45:40.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:45:40.803: INFO: rc: 1 May 24 23:45:40.803: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:45:50.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:45:50.915: INFO: rc: 1 May 24 23:45:50.915: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:46:00.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:46:01.025: INFO: rc: 1 May 24 23:46:01.025: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:46:11.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:46:11.130: INFO: rc: 1 May 24 23:46:11.130: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:46:21.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:46:21.237: INFO: rc: 1 May 24 23:46:21.237: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:46:31.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:46:31.338: INFO: rc: 1 May 24 23:46:31.338: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:46:41.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:46:41.447: INFO: rc: 1 May 24 23:46:41.447: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:46:51.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:46:51.550: INFO: rc: 1 May 24 23:46:51.550: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:47:01.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:47:01.649: INFO: rc: 1 May 24 23:47:01.649: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:47:11.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:47:11.749: INFO: rc: 1 May 24 23:47:11.749: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:47:21.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:47:21.838: INFO: rc: 1 May 24 23:47:21.838: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:47:31.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:47:31.950: INFO: rc: 1 May 24 23:47:31.950: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:47:41.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:47:42.069: INFO: rc: 1 May 24 23:47:42.069: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:47:52.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:47:52.175: INFO: rc: 1 May 24 23:47:52.175: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit 
status 1 May 24 23:48:02.176: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:48:02.287: INFO: rc: 1 May 24 23:48:02.287: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:48:12.288: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:48:12.608: INFO: rc: 1 May 24 23:48:12.608: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 24 23:48:22.608: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5942 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:48:22.723: INFO: rc: 1 May 24 23:48:22.724: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: May 24 23:48:22.724: INFO: Scaling statefulset ss to 0 May 24 23:48:22.759: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 24 23:48:22.763: INFO: Deleting all statefulset in ns statefulset-5942 May 24 23:48:22.765: INFO: Scaling statefulset ss to 0 May 24 23:48:22.775: INFO: Waiting for statefulset status.replicas updated to 0 May 24 23:48:22.777: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:48:22.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5942" for this suite. 
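In API terms, the scale-down above is just a spec.replicas update followed by polling status.replicas. A minimal client-go sketch of that step, assuming client-go v0.18+; the namespace, StatefulSet name, and kubeconfig path are taken from this run, while the polling interval and timeout are illustrative and not the e2e framework's actual values:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns, name := "statefulset-5942", "ss"

	// Step 1: set spec.replicas to 0, as "Scaling statefulset ss to 0" does.
	ss, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	zero := int32(0)
	ss.Spec.Replicas = &zero
	if _, err := cs.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Step 2: poll until the controller reports status.replicas == 0,
	// as "Waiting for statefulset status.replicas updated to 0" does.
	if err := wait.PollImmediate(time.Second, 3*time.Minute, func() (bool, error) {
		cur, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return cur.Status.Replicas == 0, nil
	}); err != nil {
		panic(err)
	}
	fmt.Println("statefulset ss drained to 0 replicas")
}

The conformance test performs the equivalent through its framework helpers; the sketch only makes the two-step shape of the operation explicit.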
• [SLOW TEST:366.236 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":18,"skipped":161,"failed":0} [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:48:22.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:49:22.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-720" for this suite. • [SLOW TEST:60.143 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":19,"skipped":161,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:49:22.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:49:36.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9437" for this suite. • [SLOW TEST:13.332 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":288,"completed":20,"skipped":172,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:49:36.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 24 23:49:36.386: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 23:49:36.435: INFO: Waiting for terminating namespaces to be deleted...
May 24 23:49:36.438: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 24 23:49:36.455: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 24 23:49:36.455: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 24 23:49:36.455: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 24 23:49:36.455: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 24 23:49:36.455: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 24 23:49:36.455: INFO: Container kindnet-cni ready: true, restart count 0 May 24 23:49:36.455: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 24 23:49:36.455: INFO: Container kube-proxy ready: true, restart count 0 May 24 23:49:36.455: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 24 23:49:36.460: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 24 23:49:36.460: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 24 23:49:36.460: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 24 23:49:36.460: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 24 23:49:36.460: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 24 23:49:36.460: INFO: Container kindnet-cni ready: true, restart count 0 May 24 23:49:36.460: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 24 23:49:36.460: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-2990c487-9ae9-44c5-87f5-14f9fca7b454 90 STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod (pod3) with hostPort 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-2990c487-9ae9-44c5-87f5-14f9fca7b454 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-2990c487-9ae9-44c5-87f5-14f9fca7b454 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:49:52.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8860" for this suite.
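The rule this test validates: two hostPort bindings conflict only when port, hostIP, and protocol all match, so pod2 and pod3 above schedule cleanly next to pod1. A sketch of the three port specs using the corev1 types; the host port and IPs mirror the log, while the container port is an illustrative assumption:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// ports builds the single host-port binding each of the three test pods asks for.
func ports(hostIP string, proto corev1.Protocol) []corev1.ContainerPort {
	return []corev1.ContainerPort{{
		ContainerPort: 8080,  // assumption: any container port works here
		HostPort:      54321, // the same host port on all three pods
		HostIP:        hostIP,
		Protocol:      proto,
	}}
}

func main() {
	pod1 := ports("127.0.0.1", corev1.ProtocolTCP) // baseline binding
	pod2 := ports("127.0.0.2", corev1.ProtocolTCP) // same port, different hostIP: no conflict
	pod3 := ports("127.0.0.2", corev1.ProtocolUDP) // same port and IP, different protocol: no conflict
	fmt.Println(pod1, pod2, pod3)
}

Printing the structs only keeps the sketch runnable; in the real test each spec is embedded in a pod pinned to the labeled node.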
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.499 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":21,"skipped":190,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:49:52.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-2576a1e7-7457-4c32-ae23-77b7ae24ecc8 STEP: Creating a pod to test consume configMaps May 24 23:49:52.892: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-819077bf-0d41-49d9-9748-4db19fa311df" in namespace "projected-1134" to be "Succeeded or Failed" May 24 23:49:52.894: INFO: Pod "pod-projected-configmaps-819077bf-0d41-49d9-9748-4db19fa311df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.358091ms May 24 23:49:54.898: INFO: Pod "pod-projected-configmaps-819077bf-0d41-49d9-9748-4db19fa311df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006368386s May 24 23:49:56.902: INFO: Pod "pod-projected-configmaps-819077bf-0d41-49d9-9748-4db19fa311df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009941035s STEP: Saw pod success May 24 23:49:56.902: INFO: Pod "pod-projected-configmaps-819077bf-0d41-49d9-9748-4db19fa311df" satisfied condition "Succeeded or Failed" May 24 23:49:56.904: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-819077bf-0d41-49d9-9748-4db19fa311df container projected-configmap-volume-test: STEP: delete the pod May 24 23:49:56.984: INFO: Waiting for pod pod-projected-configmaps-819077bf-0d41-49d9-9748-4db19fa311df to disappear May 24 23:49:57.001: INFO: Pod pod-projected-configmaps-819077bf-0d41-49d9-9748-4db19fa311df no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:49:57.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1134" for this suite. 
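For reference, the volume shape behind "defaultMode set": a projected volume whose DefaultMode applies to every file rendered from the configMap. A sketch with the corev1 types; 0400 is an assumption about the mode under test (the log does not show the value), and the configMap name is copied from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // assumption: owner read-only, applied to every projected file
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							// name copied from the log above
							Name: "projected-configmap-test-volume-2576a1e7-7457-4c32-ae23-77b7ae24ecc8",
						},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}

The test then reads the mounted file from a client pod and checks both its content and its permission bits, which is why the pod must reach "Succeeded or Failed" before the check runs.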
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":22,"skipped":203,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:49:57.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:49:57.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9205" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":23,"skipped":229,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:49:57.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-6477/configmap-test-d507ddf7-736a-4b58-946a-a70079c98138 STEP: Creating a pod to test consume configMaps May 24 23:49:57.212: INFO: Waiting up to 5m0s for pod "pod-configmaps-55bed507-45d4-4a05-ba5d-a56cbc1c21d7" in namespace "configmap-6477" to be "Succeeded or Failed" May 24 23:49:57.241: INFO: Pod "pod-configmaps-55bed507-45d4-4a05-ba5d-a56cbc1c21d7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.8207ms May 24 23:49:59.245: INFO: Pod "pod-configmaps-55bed507-45d4-4a05-ba5d-a56cbc1c21d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032659054s May 24 23:50:01.248: INFO: Pod "pod-configmaps-55bed507-45d4-4a05-ba5d-a56cbc1c21d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03531397s May 24 23:50:03.252: INFO: Pod "pod-configmaps-55bed507-45d4-4a05-ba5d-a56cbc1c21d7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.039079667s STEP: Saw pod success May 24 23:50:03.252: INFO: Pod "pod-configmaps-55bed507-45d4-4a05-ba5d-a56cbc1c21d7" satisfied condition "Succeeded or Failed" May 24 23:50:03.254: INFO: Trying to get logs from node latest-worker pod pod-configmaps-55bed507-45d4-4a05-ba5d-a56cbc1c21d7 container env-test: STEP: delete the pod May 24 23:50:03.295: INFO: Waiting for pod pod-configmaps-55bed507-45d4-4a05-ba5d-a56cbc1c21d7 to disappear May 24 23:50:03.322: INFO: Pod pod-configmaps-55bed507-45d4-4a05-ba5d-a56cbc1c21d7 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:50:03.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6477" for this suite. • [SLOW TEST:6.217 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":24,"skipped":246,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:50:03.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:50:09.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6297" for this suite. STEP: Destroying namespace "nsdeletetest-2430" for this suite. May 24 23:50:09.741: INFO: Namespace nsdeletetest-2430 was already deleted STEP: Destroying namespace "nsdeletetest-2621" for this suite. 
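The property checked above is that Services are garbage-collected with their namespace. A minimal client-go sketch of the same flow under assumed names (nsdeletetest-demo, test-service); error handling is trimmed to panics for brevity:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	nsName := "nsdeletetest-demo" // hypothetical namespace name

	// Create the namespace and a Service inside it.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: nsName}}
	if _, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
		},
	}
	if _, err := cs.CoreV1().Services(nsName).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Delete the namespace and wait until it is fully removed.
	if err := cs.CoreV1().Namespaces().Delete(ctx, nsName, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	if err := wait.Poll(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, nsName, metav1.GetOptions{})
		return apierrors.IsNotFound(err), nil
	}); err != nil {
		panic(err)
	}
	fmt.Println("namespace gone; its service was removed with it")
}

The conformance test finishes by recreating a namespace with the same name and verifying that listing Services in it returns nothing, which is the "Verifying there is no service in the namespace" step above.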
• [SLOW TEST:6.411 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":25,"skipped":254,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:50:09.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:50:16.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1032" for this suite. • [SLOW TEST:7.055 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":288,"completed":26,"skipped":261,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:50:16.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 24 23:50:16.986: INFO: Waiting up to 1m0s for all nodes to be ready May 24 23:51:17.013: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:51:17.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. May 24 23:51:21.185: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 23:51:37.560: INFO: pods created so far: [1 1 1] May 24 23:51:37.560: INFO: length of pods created so far: 3 May 24 23:51:51.700: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:51:58.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-2859" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:51:58.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8345" for this suite. 
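Preemption here is driven entirely by pod priority: the test creates PriorityClasses and lets higher-priority ReplicaSet pods evict lower-priority ones on the chosen node (latest-worker2). A sketch of the two objects involved, with hypothetical names and values; only PriorityClassName ties a pod to its class:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A class whose pods outrank the victims' class (the value is illustrative).
	high := schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-high-priority"},
		Value:      1000,
	}
	// A preemptor pod only needs to reference the class by name.
	preemptor := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor"},
		Spec: corev1.PodSpec{
			PriorityClassName: high.Name,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2", // placeholder image
			}},
		},
	}
	fmt.Println(high.Name, preemptor.Spec.PriorityClassName)
}

The "pods created so far: [1 1 1]" lines above track the per-ReplicaSet pod counts as lower-priority pods are displaced and higher-priority ones take their place.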
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:102.070 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":27,"skipped":265,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:51:58.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-2d52r in namespace proxy-7295 I0524 23:51:58.996274 7 runners.go:190] Created replication controller with name: proxy-service-2d52r, namespace: proxy-7295, replica count: 1 I0524 23:52:00.046775 7 runners.go:190] proxy-service-2d52r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 23:52:01.047114 7 runners.go:190] proxy-service-2d52r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 23:52:02.047415 7 runners.go:190] proxy-service-2d52r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 23:52:03.047659 7 runners.go:190] proxy-service-2d52r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 23:52:04.047948 7 runners.go:190] proxy-service-2d52r Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 23:52:04.185: INFO: setup took 5.267255551s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 24 23:52:04.366: INFO: (0) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 180.458636ms) May 24 23:52:04.367: INFO: (0) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:1080/proxy/: test<... (200; 181.806132ms) May 24 23:52:04.371: INFO: (0) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 185.517662ms) May 24 23:52:04.400: INFO: (0) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname1/proxy/: foo (200; 214.601369ms) May 24 23:52:04.400: INFO: (0) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:1080/proxy/: ... 
(200; 214.682219ms) May 24 23:52:04.404: INFO: (0) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname1/proxy/: foo (200; 218.599631ms) May 24 23:52:04.404: INFO: (0) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 218.608395ms) May 24 23:52:04.404: INFO: (0) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv/proxy/: test (200; 218.716061ms) May 24 23:52:04.404: INFO: (0) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 218.802924ms) May 24 23:52:04.405: INFO: (0) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 219.781338ms) May 24 23:52:04.405: INFO: (0) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname2/proxy/: bar (200; 219.781547ms) May 24 23:52:04.434: INFO: (0) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname1/proxy/: tls baz (200; 249.155537ms) May 24 23:52:04.434: INFO: (0) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 249.167946ms) May 24 23:52:04.434: INFO: (0) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: test<... (200; 16.185549ms) May 24 23:52:04.451: INFO: (1) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 16.209384ms) May 24 23:52:04.452: INFO: (1) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 17.754786ms) May 24 23:52:04.452: INFO: (1) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: ... (200; 17.793417ms) May 24 23:52:04.453: INFO: (1) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv/proxy/: test (200; 18.673993ms) May 24 23:52:04.453: INFO: (1) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 18.765118ms) May 24 23:52:04.453: INFO: (1) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname1/proxy/: tls baz (200; 18.837791ms) May 24 23:52:04.454: INFO: (1) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 19.2438ms) May 24 23:52:04.454: INFO: (1) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 19.335537ms) May 24 23:52:04.454: INFO: (1) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname1/proxy/: foo (200; 19.40755ms) May 24 23:52:04.454: INFO: (1) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname2/proxy/: tls qux (200; 19.330777ms) May 24 23:52:04.454: INFO: (1) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 19.366653ms) May 24 23:52:04.454: INFO: (1) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname2/proxy/: bar (200; 19.42143ms) May 24 23:52:04.454: INFO: (1) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname1/proxy/: foo (200; 19.359147ms) May 24 23:52:04.454: INFO: (1) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 19.378931ms) May 24 23:52:04.707: INFO: (2) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 253.022135ms) May 24 23:52:04.707: INFO: (2) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:1080/proxy/: ... 
(200; 253.033316ms) May 24 23:52:04.707: INFO: (2) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 253.039459ms) May 24 23:52:04.707: INFO: (2) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 253.11259ms) May 24 23:52:04.707: INFO: (2) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:1080/proxy/: test<... (200; 253.207573ms) May 24 23:52:04.708: INFO: (2) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv/proxy/: test (200; 253.788172ms) May 24 23:52:04.709: INFO: (2) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 255.007639ms) May 24 23:52:04.711: INFO: (2) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname2/proxy/: bar (200; 257.456204ms) May 24 23:52:04.711: INFO: (2) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname1/proxy/: foo (200; 257.544633ms) May 24 23:52:04.712: INFO: (2) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname1/proxy/: foo (200; 257.432691ms) May 24 23:52:04.712: INFO: (2) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 257.525524ms) May 24 23:52:04.712: INFO: (2) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname2/proxy/: tls qux (200; 257.475806ms) May 24 23:52:04.712: INFO: (2) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 257.601782ms) May 24 23:52:04.712: INFO: (2) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 257.608656ms) May 24 23:52:04.712: INFO: (2) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: test (200; 65.225019ms) May 24 23:52:04.839: INFO: (3) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 64.867667ms) May 24 23:52:04.839: INFO: (3) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 65.070242ms) May 24 23:52:04.839: INFO: (3) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:1080/proxy/: ... (200; 65.118962ms) May 24 23:52:04.839: INFO: (3) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 65.158813ms) May 24 23:52:04.839: INFO: (3) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: test<... 
(200; 65.611727ms) May 24 23:52:04.850: INFO: (3) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname2/proxy/: tls qux (200; 75.986268ms) May 24 23:52:04.870: INFO: (3) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname1/proxy/: foo (200; 96.440449ms) May 24 23:52:04.871: INFO: (3) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 97.064001ms) May 24 23:52:04.871: INFO: (3) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname1/proxy/: tls baz (200; 96.7801ms) May 24 23:52:04.871: INFO: (3) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname1/proxy/: foo (200; 96.774439ms) May 24 23:52:04.872: INFO: (3) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname2/proxy/: bar (200; 97.880717ms) May 24 23:52:04.881: INFO: (4) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 9.328892ms) May 24 23:52:04.881: INFO: (4) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 9.409791ms) May 24 23:52:04.882: INFO: (4) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv/proxy/: test (200; 9.400277ms) May 24 23:52:04.882: INFO: (4) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 9.328102ms) May 24 23:52:04.882: INFO: (4) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:1080/proxy/: test<... (200; 9.429493ms) May 24 23:52:04.882: INFO: (4) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: ... (200; 9.902259ms) May 24 23:52:04.886: INFO: (4) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname1/proxy/: foo (200; 13.696393ms) May 24 23:52:04.886: INFO: (4) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname1/proxy/: tls baz (200; 13.859593ms) May 24 23:52:04.886: INFO: (4) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname2/proxy/: bar (200; 13.982439ms) May 24 23:52:04.886: INFO: (4) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 13.959428ms) May 24 23:52:04.911: INFO: (5) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:1080/proxy/: ... (200; 24.899463ms) May 24 23:52:04.913: INFO: (5) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 26.877199ms) May 24 23:52:04.913: INFO: (5) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname2/proxy/: bar (200; 27.028855ms) May 24 23:52:04.913: INFO: (5) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname1/proxy/: foo (200; 27.128868ms) May 24 23:52:04.913: INFO: (5) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname1/proxy/: foo (200; 26.991415ms) May 24 23:52:04.913: INFO: (5) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 27.050447ms) May 24 23:52:04.913: INFO: (5) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname2/proxy/: tls qux (200; 27.129233ms) May 24 23:52:04.913: INFO: (5) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname1/proxy/: tls baz (200; 27.192683ms) May 24 23:52:04.913: INFO: (5) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:1080/proxy/: test<... 
(200; 27.146302ms) May 24 23:52:04.913: INFO: (5) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 27.065744ms) May 24 23:52:04.914: INFO: (5) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 27.900635ms) May 24 23:52:04.914: INFO: (5) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 27.817413ms) May 24 23:52:04.914: INFO: (5) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv/proxy/: test (200; 27.885409ms) May 24 23:52:04.914: INFO: (5) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 28.141095ms) May 24 23:52:04.914: INFO: (5) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 28.148264ms) May 24 23:52:04.915: INFO: (5) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: test<... (200; 20.800415ms) May 24 23:52:04.938: INFO: (6) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 22.82607ms) May 24 23:52:04.938: INFO: (6) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname1/proxy/: foo (200; 22.87184ms) May 24 23:52:04.938: INFO: (6) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: ... (200; 24.252099ms) May 24 23:52:04.939: INFO: (6) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 24.42418ms) May 24 23:52:04.940: INFO: (6) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 24.517283ms) May 24 23:52:04.940: INFO: (6) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv/proxy/: test (200; 24.830106ms) May 24 23:52:04.940: INFO: (6) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 24.892475ms) May 24 23:52:04.940: INFO: (6) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname2/proxy/: bar (200; 24.901318ms) May 24 23:52:04.940: INFO: (6) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 24.960299ms) May 24 23:52:04.959: INFO: (7) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 18.775492ms) May 24 23:52:04.959: INFO: (7) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv/proxy/: test (200; 18.812911ms) May 24 23:52:04.959: INFO: (7) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 19.028901ms) May 24 23:52:04.961: INFO: (7) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 21.242221ms) May 24 23:52:04.961: INFO: (7) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname1/proxy/: tls baz (200; 21.24687ms) May 24 23:52:04.961: INFO: (7) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname2/proxy/: tls qux (200; 21.327001ms) May 24 23:52:04.961: INFO: (7) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:1080/proxy/: ... (200; 21.442397ms) May 24 23:52:04.961: INFO: (7) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 21.261434ms) May 24 23:52:04.961: INFO: (7) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: test<... 
(200; 21.301972ms) May 24 23:52:04.962: INFO: (7) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname2/proxy/: bar (200; 21.492475ms) May 24 23:52:04.962: INFO: (7) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 21.589146ms) May 24 23:52:04.962: INFO: (7) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname1/proxy/: foo (200; 21.738962ms) May 24 23:52:04.962: INFO: (7) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 21.786346ms) May 24 23:52:05.031: INFO: (8) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: ... (200; 68.791417ms) May 24 23:52:05.031: INFO: (8) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv/proxy/: test (200; 68.863255ms) May 24 23:52:05.031: INFO: (8) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 69.023721ms) May 24 23:52:05.031: INFO: (8) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 69.267393ms) May 24 23:52:05.031: INFO: (8) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 69.32951ms) May 24 23:52:05.032: INFO: (8) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 69.934667ms) May 24 23:52:05.032: INFO: (8) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname2/proxy/: bar (200; 70.111728ms) May 24 23:52:05.032: INFO: (8) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:1080/proxy/: test<... (200; 70.20077ms) May 24 23:52:05.032: INFO: (8) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 70.191534ms) May 24 23:52:05.032: INFO: (8) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname1/proxy/: foo (200; 70.113852ms) May 24 23:52:05.032: INFO: (8) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 70.152289ms) May 24 23:52:05.033: INFO: (8) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname1/proxy/: tls baz (200; 70.907521ms) May 24 23:52:05.033: INFO: (8) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 70.930349ms) May 24 23:52:05.033: INFO: (8) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname2/proxy/: tls qux (200; 71.293157ms) May 24 23:52:05.054: INFO: (8) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname1/proxy/: foo (200; 91.822685ms) May 24 23:52:05.200: INFO: (9) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 145.450855ms) May 24 23:52:05.200: INFO: (9) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:1080/proxy/: ... (200; 145.699ms) May 24 23:52:05.201: INFO: (9) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 146.911644ms) May 24 23:52:05.201: INFO: (9) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:1080/proxy/: test<... 
(200; 147.015276ms) May 24 23:52:05.201: INFO: (9) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: test (200; 149.014244ms) May 24 23:52:05.203: INFO: (9) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 149.11816ms) May 24 23:52:05.204: INFO: (9) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 149.552134ms) May 24 23:52:05.204: INFO: (9) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname1/proxy/: foo (200; 149.385016ms) May 24 23:52:05.217: INFO: (9) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname1/proxy/: tls baz (200; 163.054336ms) May 24 23:52:05.223: INFO: (10) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 5.926094ms) May 24 23:52:05.264: INFO: (10) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname1/proxy/: tls baz (200; 46.629249ms) May 24 23:52:05.264: INFO: (10) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 46.148235ms) May 24 23:52:05.264: INFO: (10) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 46.09864ms) May 24 23:52:05.264: INFO: (10) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv/proxy/: test (200; 46.376927ms) May 24 23:52:05.264: INFO: (10) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 46.848965ms) May 24 23:52:05.264: INFO: (10) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 46.14696ms) May 24 23:52:05.265: INFO: (10) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 47.186617ms) May 24 23:52:05.265: INFO: (10) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:1080/proxy/: ... (200; 47.331731ms) May 24 23:52:05.265: INFO: (10) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:1080/proxy/: test<... (200; 47.399137ms) May 24 23:52:05.265: INFO: (10) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname1/proxy/: foo (200; 47.640342ms) May 24 23:52:05.265: INFO: (10) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname1/proxy/: foo (200; 46.787105ms) May 24 23:52:05.266: INFO: (10) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname2/proxy/: tls qux (200; 47.644097ms) May 24 23:52:05.266: INFO: (10) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname2/proxy/: bar (200; 47.369318ms) May 24 23:52:05.266: INFO: (10) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 47.462179ms) May 24 23:52:05.266: INFO: (10) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: test (200; 172.567061ms) May 24 23:52:05.439: INFO: (11) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname2/proxy/: tls qux (200; 172.38693ms) May 24 23:52:05.439: INFO: (11) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:1080/proxy/: test<... (200; 172.87907ms) May 24 23:52:05.439: INFO: (11) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 173.343697ms) May 24 23:52:05.439: INFO: (11) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:1080/proxy/: ... 
(200; 173.260229ms) May 24 23:52:05.440: INFO: (11) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 173.818608ms) May 24 23:52:05.636: INFO: (11) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname1/proxy/: foo (200; 369.832594ms) May 24 23:52:05.642: INFO: (11) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 375.707798ms) May 24 23:52:05.642: INFO: (11) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname1/proxy/: tls baz (200; 375.976435ms) May 24 23:52:05.647: INFO: (12) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv/proxy/: test (200; 5.115619ms) May 24 23:52:05.653: INFO: (12) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 10.928879ms) May 24 23:52:05.653: INFO: (12) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:1080/proxy/: ... (200; 10.922076ms) May 24 23:52:05.654: INFO: (12) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:1080/proxy/: test<... (200; 11.493917ms) May 24 23:52:05.654: INFO: (12) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 12.098342ms) May 24 23:52:05.655: INFO: (12) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 12.398165ms) May 24 23:52:05.655: INFO: (12) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: test (200; 42.847345ms) May 24 23:52:05.702: INFO: (13) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 42.703249ms) May 24 23:52:05.703: INFO: (13) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname2/proxy/: bar (200; 43.428873ms) May 24 23:52:05.703: INFO: (13) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 43.616418ms) May 24 23:52:05.703: INFO: (13) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:1080/proxy/: ... (200; 43.573944ms) May 24 23:52:05.706: INFO: (13) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:1080/proxy/: test<... (200; 46.072522ms) May 24 23:52:05.706: INFO: (13) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 46.002245ms) May 24 23:52:05.706: INFO: (13) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname1/proxy/: foo (200; 46.069455ms) May 24 23:52:05.706: INFO: (13) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: test<... 
(200; 44.02048ms) May 24 23:52:05.751: INFO: (14) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: test (200; 46.252313ms) May 24 23:52:05.752: INFO: (14) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname1/proxy/: tls baz (200; 46.272279ms) May 24 23:52:05.753: INFO: (14) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname1/proxy/: foo (200; 46.865551ms) May 24 23:52:05.753: INFO: (14) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 46.877026ms) May 24 23:52:05.753: INFO: (14) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname2/proxy/: bar (200; 46.958563ms) May 24 23:52:05.753: INFO: (14) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 47.406145ms) May 24 23:52:05.753: INFO: (14) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 47.557001ms) May 24 23:52:05.754: INFO: (14) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname2/proxy/: tls qux (200; 47.459004ms) May 24 23:52:05.754: INFO: (14) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 47.733489ms) May 24 23:52:05.754: INFO: (14) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:1080/proxy/: ... (200; 48.169797ms) May 24 23:52:05.754: INFO: (14) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname1/proxy/: foo (200; 48.39748ms) May 24 23:52:05.762: INFO: (15) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 7.676762ms) May 24 23:52:05.762: INFO: (15) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 7.847368ms) May 24 23:52:05.763: INFO: (15) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: ... (200; 7.882302ms) May 24 23:52:05.763: INFO: (15) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname2/proxy/: bar (200; 8.16542ms) May 24 23:52:05.763: INFO: (15) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname2/proxy/: tls qux (200; 7.930345ms) May 24 23:52:05.763: INFO: (15) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname1/proxy/: tls baz (200; 8.280884ms) May 24 23:52:05.763: INFO: (15) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:1080/proxy/: test<... 
(200; 8.358813ms) May 24 23:52:05.763: INFO: (15) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 8.792719ms) May 24 23:52:05.763: INFO: (15) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 8.876536ms) May 24 23:52:05.764: INFO: (15) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 8.883375ms) May 24 23:52:05.764: INFO: (15) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 8.997325ms) May 24 23:52:05.764: INFO: (15) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname1/proxy/: foo (200; 9.047231ms) May 24 23:52:05.764: INFO: (15) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname1/proxy/: foo (200; 9.202593ms) May 24 23:52:05.764: INFO: (15) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv/proxy/: test (200; 9.422957ms) May 24 23:52:05.776: INFO: (16) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 12.014738ms) May 24 23:52:05.777: INFO: (16) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 12.91491ms) May 24 23:52:05.777: INFO: (16) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 13.047929ms) May 24 23:52:05.777: INFO: (16) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 13.133659ms) May 24 23:52:05.777: INFO: (16) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 13.291008ms) May 24 23:52:05.777: INFO: (16) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname1/proxy/: tls baz (200; 13.437641ms) May 24 23:52:05.778: INFO: (16) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 13.505309ms) May 24 23:52:05.778: INFO: (16) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 13.661561ms) May 24 23:52:05.778: INFO: (16) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv/proxy/: test (200; 13.718029ms) May 24 23:52:05.778: INFO: (16) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:1080/proxy/: ... (200; 13.953777ms) May 24 23:52:05.779: INFO: (16) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:1080/proxy/: test<... (200; 14.537318ms) May 24 23:52:05.779: INFO: (16) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: test<... (200; 4.658236ms) May 24 23:52:05.791: INFO: (17) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 11.933694ms) May 24 23:52:05.791: INFO: (17) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 12.138846ms) May 24 23:52:05.791: INFO: (17) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 12.154862ms) May 24 23:52:05.792: INFO: (17) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv/proxy/: test (200; 12.46349ms) May 24 23:52:05.792: INFO: (17) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 12.4769ms) May 24 23:52:05.792: INFO: (17) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:1080/proxy/: ... 
(200; 12.631588ms) May 24 23:52:05.792: INFO: (17) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 12.63298ms) May 24 23:52:05.792: INFO: (17) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 12.774135ms) May 24 23:52:05.792: INFO: (17) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: test (200; 14.352932ms) May 24 23:52:05.828: INFO: (18) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 14.400764ms) May 24 23:52:05.828: INFO: (18) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 14.572283ms) May 24 23:52:05.828: INFO: (18) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:1080/proxy/: test<... (200; 14.523243ms) May 24 23:52:05.828: INFO: (18) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 14.521876ms) May 24 23:52:05.829: INFO: (18) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:162/proxy/: bar (200; 14.930257ms) May 24 23:52:05.829: INFO: (18) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:1080/proxy/: ... (200; 15.017461ms) May 24 23:52:05.829: INFO: (18) /api/v1/namespaces/proxy-7295/pods/http:proxy-service-2d52r-d8tgv:160/proxy/: foo (200; 15.087043ms) May 24 23:52:05.829: INFO: (18) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 15.243246ms) May 24 23:52:05.829: INFO: (18) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: test<... (200; 7.630526ms) May 24 23:52:05.842: INFO: (19) /api/v1/namespaces/proxy-7295/pods/proxy-service-2d52r-d8tgv/proxy/: test (200; 7.704496ms) May 24 23:52:05.842: INFO: (19) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:462/proxy/: tls qux (200; 7.832813ms) May 24 23:52:05.842: INFO: (19) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:443/proxy/: ... (200; 8.109273ms) May 24 23:52:05.842: INFO: (19) /api/v1/namespaces/proxy-7295/pods/https:proxy-service-2d52r-d8tgv:460/proxy/: tls baz (200; 8.419552ms) May 24 23:52:05.870: INFO: (19) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname1/proxy/: tls baz (200; 35.959816ms) May 24 23:52:05.870: INFO: (19) /api/v1/namespaces/proxy-7295/services/https:proxy-service-2d52r:tlsportname2/proxy/: tls qux (200; 36.040119ms) May 24 23:52:05.870: INFO: (19) /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname2/proxy/: bar (200; 36.163159ms) May 24 23:52:05.870: INFO: (19) /api/v1/namespaces/proxy-7295/services/http:proxy-service-2d52r:portname2/proxy/: bar (200; 36.514537ms) STEP: deleting ReplicationController proxy-service-2d52r in namespace proxy-7295, will wait for the garbage collector to delete the pods May 24 23:52:05.949: INFO: Deleting ReplicationController proxy-service-2d52r took: 18.798095ms May 24 23:52:06.250: INFO: Terminating ReplicationController proxy-service-2d52r pods took: 300.28587ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:52:14.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7295" for this suite. 
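(Annotation, not part of the run output: every URL this spec requests goes through the apiserver proxy subresource — pods/<scheme>:<name>:<port>/proxy/ and services/<scheme>:<name>:<portname>/proxy/. A minimal client-go sketch of one such request, assuming client-go v0.18+; the namespace and service name are taken from the log above, while the program scaffolding is illustrative and not the test's own code.)

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the e2e run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/namespaces/proxy-7295/services/proxy-service-2d52r:portname1/proxy/
	// -- the same path logged above, which returned "foo" with HTTP 200.
	body, err := clientset.CoreV1().RESTClient().Get().
		Namespace("proxy-7295").
		Resource("services").
		Name("proxy-service-2d52r:portname1").
		SubResource("proxy").
		Do(context.TODO()).
		Raw()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}

(The pods/... variants in the log work the same way, with Resource("pods") and the pod name plus port in Name().)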
• [SLOW TEST:16.087 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":28,"skipped":267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:52:14.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 24 23:52:15.047: INFO: PodSpec: initContainers in spec.initContainers May 24 23:53:07.811: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-578c26b4-2f3e-4080-85b0-7b501d078dbc", GenerateName:"", Namespace:"init-container-6972", SelfLink:"/api/v1/namespaces/init-container-6972/pods/pod-init-578c26b4-2f3e-4080-85b0-7b501d078dbc", UID:"6f927b06-3a32-4642-ad9f-d667949950dd", ResourceVersion:"7408709", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725961135, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"47908153"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003992040), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003992060)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003992080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0039920a0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-h54ns", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0019aa2c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), 
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h54ns", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h54ns", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h54ns", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f7e188), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0009fa000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f7e2d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f7e2f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002f7e2f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002f7e2fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961135, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961135, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961135, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961135, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.1.60", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.60"}}, StartTime:(*v1.Time)(0xc0039920c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0009fa0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0009fa150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://7f541f3db1459813be75fef49c457cebe0568e67bb173300ef8330cf38f785ac", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003992100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0039920e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002f7e3af)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:53:07.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6972" for this suite. • [SLOW TEST:52.897 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":29,"skipped":296,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:53:07.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 23:53:07.987: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 24 23:53:10.125: INFO: Updating 
replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:53:11.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6201" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":30,"skipped":304,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:53:11.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 23:53:12.558: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 23:53:14.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961192, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961192, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961192, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961192, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 23:53:17.743: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 23:53:17.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7094-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:53:18.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "webhook-2423" for this suite. STEP: Destroying namespace "webhook-2423-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.042 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":31,"skipped":309,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:53:19.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 23:53:19.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6886' May 24 23:53:22.919: INFO: stderr: "" May 24 23:53:22.919: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 24 23:53:22.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6886' May 24 23:53:23.281: INFO: stderr: "" May 24 23:53:23.281: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 24 23:53:24.294: INFO: Selector matched 1 pods for map[app:agnhost] May 24 23:53:24.294: INFO: Found 0 / 1 May 24 23:53:25.396: INFO: Selector matched 1 pods for map[app:agnhost] May 24 23:53:25.396: INFO: Found 0 / 1 May 24 23:53:26.286: INFO: Selector matched 1 pods for map[app:agnhost] May 24 23:53:26.286: INFO: Found 0 / 1 May 24 23:53:27.286: INFO: Selector matched 1 pods for map[app:agnhost] May 24 23:53:27.286: INFO: Found 1 / 1 May 24 23:53:27.286: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 24 23:53:27.289: INFO: Selector matched 1 pods for map[app:agnhost] May 24 23:53:27.289: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 24 23:53:27.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-dgvs5 --namespace=kubectl-6886' May 24 23:53:27.406: INFO: stderr: "" May 24 23:53:27.406: INFO: stdout: "Name: agnhost-master-dgvs5\nNamespace: kubectl-6886\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Sun, 24 May 2020 23:53:23 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.65\nIPs:\n IP: 10.244.2.65\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://80d0ac157609ea0ceec1264fc1f958247cf6b4096942b7344d348f50478f8ee4\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 24 May 2020 23:53:25 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-h7snf (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-h7snf:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-h7snf\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-6886/agnhost-master-dgvs5 to latest-worker2\n Normal Pulled 3s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 2s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 2s kubelet, latest-worker2 Started container agnhost-master\n" May 24 23:53:27.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-6886' May 24 23:53:27.569: INFO: stderr: "" May 24 23:53:27.569: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6886\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-dgvs5\n" May 24 23:53:27.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-6886' May 24 23:53:27.685: INFO: stderr: "" May 24 23:53:27.685: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6886\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.105.113.208\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.65:6379\nSession Affinity: None\nEvents: \n" May 24 23:53:27.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe 
node latest-control-plane' May 24 23:53:27.830: INFO: stderr: "" May 24 23:53:27.830: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Sun, 24 May 2020 23:53:19 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 24 May 2020 23:52:41 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 24 May 2020 23:52:41 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 24 May 2020 23:52:41 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 24 May 2020 23:52:41 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 25d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 25d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 25d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 25d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 24 23:53:27.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config describe namespace kubectl-6886' May 24 23:53:27.938: INFO: stderr: "" May 24 23:53:27.938: INFO: stdout: "Name: kubectl-6886\nLabels: e2e-framework=kubectl\n e2e-run=527e4af2-4a80-4010-89b2-297a700173c5\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:53:27.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6886" for this suite. • [SLOW TEST:8.763 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":32,"skipped":321,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:53:27.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 23:53:28.036: INFO: Creating ReplicaSet my-hostname-basic-59346838-194f-4e4b-9225-a3e169294b80 May 24 23:53:28.120: INFO: Pod name my-hostname-basic-59346838-194f-4e4b-9225-a3e169294b80: Found 0 pods out of 1 May 24 23:53:33.174: INFO: Pod name my-hostname-basic-59346838-194f-4e4b-9225-a3e169294b80: Found 1 pods out of 1 May 24 23:53:33.174: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-59346838-194f-4e4b-9225-a3e169294b80" is running May 24 23:53:33.178: INFO: Pod "my-hostname-basic-59346838-194f-4e4b-9225-a3e169294b80-l55zc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 23:53:28 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 23:53:30 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 23:53:30 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 23:53:28 +0000 UTC Reason: Message:}]) May 24 23:53:33.179: INFO: Trying to dial the pod May 24 23:53:38.191: INFO: Controller my-hostname-basic-59346838-194f-4e4b-9225-a3e169294b80: Got expected result from replica 1 [my-hostname-basic-59346838-194f-4e4b-9225-a3e169294b80-l55zc]: "my-hostname-basic-59346838-194f-4e4b-9225-a3e169294b80-l55zc", 1 of 1 required successes so far [AfterEach] [sig-apps] 
ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:53:38.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9599" for this suite. • [SLOW TEST:10.252 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":33,"skipped":325,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:53:38.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-e72c8b30-f3fe-411c-a3ed-2d319567220b STEP: Creating a pod to test consume secrets May 24 23:53:38.343: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e20c6516-94a2-454f-95f3-05c44c285d74" in namespace "projected-6880" to be "Succeeded or Failed" May 24 23:53:38.406: INFO: Pod "pod-projected-secrets-e20c6516-94a2-454f-95f3-05c44c285d74": Phase="Pending", Reason="", readiness=false. Elapsed: 63.087849ms May 24 23:53:40.467: INFO: Pod "pod-projected-secrets-e20c6516-94a2-454f-95f3-05c44c285d74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124678632s May 24 23:53:42.472: INFO: Pod "pod-projected-secrets-e20c6516-94a2-454f-95f3-05c44c285d74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129397578s STEP: Saw pod success May 24 23:53:42.472: INFO: Pod "pod-projected-secrets-e20c6516-94a2-454f-95f3-05c44c285d74" satisfied condition "Succeeded or Failed" May 24 23:53:42.476: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-e20c6516-94a2-454f-95f3-05c44c285d74 container projected-secret-volume-test: STEP: delete the pod May 24 23:53:42.561: INFO: Waiting for pod pod-projected-secrets-e20c6516-94a2-454f-95f3-05c44c285d74 to disappear May 24 23:53:42.574: INFO: Pod pod-projected-secrets-e20c6516-94a2-454f-95f3-05c44c285d74 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:53:42.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6880" for this suite. 
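(Annotation, not part of the run output: the volume under test in this spec is a projected Secret whose single item is remapped to a new path with an explicit file mode. A sketch of that volume shape using the core/v1 types — only the secret name comes from the log; the key, path, and 0400 mode are illustrative assumptions.)

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // "Item Mode set": the projected file gets this permission
	vol := v1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					Secret: &v1.SecretProjection{
						LocalObjectReference: v1.LocalObjectReference{
							Name: "projected-secret-test-map-e72c8b30-f3fe-411c-a3ed-2d319567220b",
						},
						// Remap a single key to a new relative path inside the mount.
						Items: []v1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}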
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":34,"skipped":349,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:53:42.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 24 23:53:45.795: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:53:45.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8172" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":35,"skipped":350,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:53:45.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 23:53:46.402: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 24 23:53:48.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-18 create -f -' May 24 23:53:51.705: INFO: stderr: "" May 24 23:53:51.705: INFO: stdout: "e2e-test-crd-publish-openapi-8325-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 24 23:53:51.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-18 delete e2e-test-crd-publish-openapi-8325-crds test-cr' May 24 23:53:51.830: INFO: stderr: "" May 24 23:53:51.830: INFO: stdout: "e2e-test-crd-publish-openapi-8325-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 24 23:53:51.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-18 apply -f -' May 24 23:53:52.102: INFO: stderr: "" May 24 23:53:52.102: INFO: stdout: "e2e-test-crd-publish-openapi-8325-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 24 23:53:52.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-18 delete e2e-test-crd-publish-openapi-8325-crds test-cr' May 24 23:53:52.206: INFO: stderr: "" May 24 23:53:52.206: INFO: stdout: "e2e-test-crd-publish-openapi-8325-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 24 23:53:52.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8325-crds' May 24 23:53:52.448: INFO: stderr: "" May 24 23:53:52.448: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8325-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:53:55.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-18" for this 
suite. • [SLOW TEST:9.573 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":36,"skipped":358,"failed":0} SS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:53:55.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7827 STEP: creating service affinity-clusterip in namespace services-7827 STEP: creating replication controller affinity-clusterip in namespace services-7827 I0524 23:53:55.564046 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-7827, replica count: 3 I0524 23:53:58.614493 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 23:54:01.614766 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 23:54:01.619: INFO: Creating new exec pod May 24 23:54:06.640: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7827 execpod-affinity5xsf2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 24 23:54:06.875: INFO: stderr: "I0524 23:54:06.779933 1311 log.go:172] (0xc000b73130) (0xc000af2320) Create stream\nI0524 23:54:06.780013 1311 log.go:172] (0xc000b73130) (0xc000af2320) Stream added, broadcasting: 1\nI0524 23:54:06.782611 1311 log.go:172] (0xc000b73130) Reply frame received for 1\nI0524 23:54:06.782667 1311 log.go:172] (0xc000b73130) (0xc0006a9cc0) Create stream\nI0524 23:54:06.782690 1311 log.go:172] (0xc000b73130) (0xc0006a9cc0) Stream added, broadcasting: 3\nI0524 23:54:06.783620 1311 log.go:172] (0xc000b73130) Reply frame received for 3\nI0524 23:54:06.783651 1311 log.go:172] (0xc000b73130) (0xc000af23c0) Create stream\nI0524 23:54:06.783665 1311 log.go:172] (0xc000b73130) (0xc000af23c0) Stream added, broadcasting: 5\nI0524 23:54:06.784403 1311 log.go:172] (0xc000b73130) Reply frame received for 5\nI0524 23:54:06.845554 1311 log.go:172] (0xc000b73130) Data frame received for 5\nI0524 23:54:06.845575 1311 log.go:172] (0xc000af23c0) (5) Data frame handling\nI0524 23:54:06.845603 1311 log.go:172] 
(0xc000af23c0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0524 23:54:06.864533 1311 log.go:172] (0xc000b73130) Data frame received for 5\nI0524 23:54:06.864575 1311 log.go:172] (0xc000af23c0) (5) Data frame handling\nI0524 23:54:06.864795 1311 log.go:172] (0xc000af23c0) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0524 23:54:06.864830 1311 log.go:172] (0xc000b73130) Data frame received for 3\nI0524 23:54:06.864848 1311 log.go:172] (0xc0006a9cc0) (3) Data frame handling\nI0524 23:54:06.865376 1311 log.go:172] (0xc000b73130) Data frame received for 5\nI0524 23:54:06.865415 1311 log.go:172] (0xc000af23c0) (5) Data frame handling\nI0524 23:54:06.867162 1311 log.go:172] (0xc000b73130) Data frame received for 1\nI0524 23:54:06.867192 1311 log.go:172] (0xc000af2320) (1) Data frame handling\nI0524 23:54:06.867213 1311 log.go:172] (0xc000af2320) (1) Data frame sent\nI0524 23:54:06.867247 1311 log.go:172] (0xc000b73130) (0xc000af2320) Stream removed, broadcasting: 1\nI0524 23:54:06.867267 1311 log.go:172] (0xc000b73130) Go away received\nI0524 23:54:06.868693 1311 log.go:172] (0xc000b73130) (0xc000af2320) Stream removed, broadcasting: 1\nI0524 23:54:06.868733 1311 log.go:172] (0xc000b73130) (0xc0006a9cc0) Stream removed, broadcasting: 3\nI0524 23:54:06.868771 1311 log.go:172] (0xc000b73130) (0xc000af23c0) Stream removed, broadcasting: 5\n" May 24 23:54:06.875: INFO: stdout: "" May 24 23:54:06.876: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7827 execpod-affinity5xsf2 -- /bin/sh -x -c nc -zv -t -w 2 10.97.73.38 80' May 24 23:54:07.067: INFO: stderr: "I0524 23:54:07.004211 1331 log.go:172] (0xc00095f1e0) (0xc000ad8820) Create stream\nI0524 23:54:07.004266 1331 log.go:172] (0xc00095f1e0) (0xc000ad8820) Stream added, broadcasting: 1\nI0524 23:54:07.007898 1331 log.go:172] (0xc00095f1e0) Reply frame received for 1\nI0524 23:54:07.007929 1331 log.go:172] (0xc00095f1e0) (0xc000406d20) Create stream\nI0524 23:54:07.007940 1331 log.go:172] (0xc00095f1e0) (0xc000406d20) Stream added, broadcasting: 3\nI0524 23:54:07.008685 1331 log.go:172] (0xc00095f1e0) Reply frame received for 3\nI0524 23:54:07.008722 1331 log.go:172] (0xc00095f1e0) (0xc0005000a0) Create stream\nI0524 23:54:07.008737 1331 log.go:172] (0xc00095f1e0) (0xc0005000a0) Stream added, broadcasting: 5\nI0524 23:54:07.009740 1331 log.go:172] (0xc00095f1e0) Reply frame received for 5\nI0524 23:54:07.061077 1331 log.go:172] (0xc00095f1e0) Data frame received for 3\nI0524 23:54:07.061305 1331 log.go:172] (0xc000406d20) (3) Data frame handling\nI0524 23:54:07.061340 1331 log.go:172] (0xc00095f1e0) Data frame received for 5\nI0524 23:54:07.061352 1331 log.go:172] (0xc0005000a0) (5) Data frame handling\nI0524 23:54:07.061363 1331 log.go:172] (0xc0005000a0) (5) Data frame sent\nI0524 23:54:07.061373 1331 log.go:172] (0xc00095f1e0) Data frame received for 5\nI0524 23:54:07.061381 1331 log.go:172] (0xc0005000a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.73.38 80\nConnection to 10.97.73.38 80 port [tcp/http] succeeded!\nI0524 23:54:07.063143 1331 log.go:172] (0xc00095f1e0) Data frame received for 1\nI0524 23:54:07.063161 1331 log.go:172] (0xc000ad8820) (1) Data frame handling\nI0524 23:54:07.063169 1331 log.go:172] (0xc000ad8820) (1) Data frame sent\nI0524 23:54:07.063181 1331 log.go:172] (0xc00095f1e0) (0xc000ad8820) Stream removed, broadcasting: 1\nI0524 23:54:07.063401 1331 log.go:172] (0xc00095f1e0) Go 
away received\nI0524 23:54:07.063490 1331 log.go:172] (0xc00095f1e0) (0xc000ad8820) Stream removed, broadcasting: 1\nI0524 23:54:07.063503 1331 log.go:172] (0xc00095f1e0) (0xc000406d20) Stream removed, broadcasting: 3\nI0524 23:54:07.063509 1331 log.go:172] (0xc00095f1e0) (0xc0005000a0) Stream removed, broadcasting: 5\n" May 24 23:54:07.067: INFO: stdout: "" May 24 23:54:07.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7827 execpod-affinity5xsf2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.97.73.38:80/ ; done' May 24 23:54:07.407: INFO: stderr: "I0524 23:54:07.210123 1353 log.go:172] (0xc00003a8f0) (0xc000ba0000) Create stream\nI0524 23:54:07.210190 1353 log.go:172] (0xc00003a8f0) (0xc000ba0000) Stream added, broadcasting: 1\nI0524 23:54:07.213824 1353 log.go:172] (0xc00003a8f0) Reply frame received for 1\nI0524 23:54:07.213880 1353 log.go:172] (0xc00003a8f0) (0xc00056cd20) Create stream\nI0524 23:54:07.213921 1353 log.go:172] (0xc00003a8f0) (0xc00056cd20) Stream added, broadcasting: 3\nI0524 23:54:07.215021 1353 log.go:172] (0xc00003a8f0) Reply frame received for 3\nI0524 23:54:07.215085 1353 log.go:172] (0xc00003a8f0) (0xc000ba01e0) Create stream\nI0524 23:54:07.215102 1353 log.go:172] (0xc00003a8f0) (0xc000ba01e0) Stream added, broadcasting: 5\nI0524 23:54:07.216132 1353 log.go:172] (0xc00003a8f0) Reply frame received for 5\nI0524 23:54:07.265522 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.265551 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.265573 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\n+ seq 0 15\nI0524 23:54:07.275734 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.275760 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.275772 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.275795 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.275804 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.275816 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.310280 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.310331 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.310353 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.311309 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.311349 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.311367 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.311397 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.311422 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.311448 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.319265 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.319303 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.319324 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.319800 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.319813 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.319819 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.319828 1353 log.go:172] 
(0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.319833 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.319837 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.324310 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.324328 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.324341 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.324782 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.324805 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.324817 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.324832 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.324840 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.324848 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.331129 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.331160 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.331190 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.331363 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.331394 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.331413 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.331436 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.331449 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.331468 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\nI0524 23:54:07.331482 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.331492 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.331525 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\nI0524 23:54:07.347023 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.347053 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.347086 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.347411 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.347428 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.347448 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.347478 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.347491 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.347505 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.350662 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.350687 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.350699 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.350894 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.350906 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.350912 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.351046 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.351070 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.351087 1353 log.go:172] (0xc00056cd20) 
(3) Data frame sent\nI0524 23:54:07.358836 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.358854 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.358868 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.359204 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.359230 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.359237 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\nI0524 23:54:07.359265 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.359299 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.359316 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.363456 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.363485 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.363513 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.363925 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.363951 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.363962 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.363979 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.363985 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.363991 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.368335 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.368354 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.368370 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.368973 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.368995 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.369005 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.369025 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.369054 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.369079 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.373557 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.373576 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.373592 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.374282 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.374306 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.374317 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.374330 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.374344 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.374352 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\nI0524 23:54:07.374362 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.374369 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.374428 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\nI0524 23:54:07.378475 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.378495 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.378512 1353 log.go:172] (0xc00056cd20) (3) 
Data frame sent\nI0524 23:54:07.378926 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.378941 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.378948 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.378958 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.378964 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.378971 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.383523 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.383541 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.383558 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.384019 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.384040 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.384050 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.384069 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.384086 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.384101 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.387496 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.387519 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.387530 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.387563 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.387599 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.387632 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.387646 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.387660 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.387691 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.392198 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.392215 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.392229 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.392573 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.392587 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.392597 1353 log.go:172] (0xc000ba01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.392714 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.392725 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.392736 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.396292 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.396308 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.396319 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.397058 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.397074 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.397085 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.397099 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.397106 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.397224 1353 log.go:172] (0xc000ba01e0) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.73.38:80/\nI0524 23:54:07.401018 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.401056 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.401103 1353 log.go:172] (0xc00056cd20) (3) Data frame sent\nI0524 23:54:07.401565 1353 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0524 23:54:07.401620 1353 log.go:172] (0xc000ba01e0) (5) Data frame handling\nI0524 23:54:07.401653 1353 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0524 23:54:07.401674 1353 log.go:172] (0xc00056cd20) (3) Data frame handling\nI0524 23:54:07.403900 1353 log.go:172] (0xc00003a8f0) Data frame received for 1\nI0524 23:54:07.403923 1353 log.go:172] (0xc000ba0000) (1) Data frame handling\nI0524 23:54:07.403952 1353 log.go:172] (0xc000ba0000) (1) Data frame sent\nI0524 23:54:07.403980 1353 log.go:172] (0xc00003a8f0) (0xc000ba0000) Stream removed, broadcasting: 1\nI0524 23:54:07.404235 1353 log.go:172] (0xc00003a8f0) Go away received\nI0524 23:54:07.404288 1353 log.go:172] (0xc00003a8f0) (0xc000ba0000) Stream removed, broadcasting: 1\nI0524 23:54:07.404325 1353 log.go:172] (0xc00003a8f0) (0xc00056cd20) Stream removed, broadcasting: 3\nI0524 23:54:07.404340 1353 log.go:172] (0xc00003a8f0) (0xc000ba01e0) Stream removed, broadcasting: 5\n" May 24 23:54:07.408: INFO: stdout: "\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k\naffinity-clusterip-fvn2k" May 24 23:54:07.408: INFO: Received response from host: May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Received response from host: affinity-clusterip-fvn2k May 24 23:54:07.408: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-7827, will wait for the garbage collector to delete the pods May 24 23:54:07.517: INFO: Deleting ReplicationController affinity-clusterip took: 5.494287ms May 24 23:54:07.918: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.740437ms 
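The sixteen identical hostnames above are the point of this test: with session affinity enabled, every request from the exec pod lands on the same backend (affinity-clusterip-fvn2k here). A minimal sketch of the Service shape being exercised (the selector and backing labels are illustrative; the run creates the backends via a replication controller):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: affinity-clusterip
  spec:
    type: ClusterIP
    sessionAffinity: ClientIP        # pin each client IP to one endpoint
    selector:
      app: affinity-clusterip        # illustrative selector
    ports:
    - port: 80
      targetPort: 80
  EOF
  # From a pod inside the cluster, repeated requests should return one pod name,
  # matching the loop the test runs above:
  for i in $(seq 0 15); do curl -s --connect-timeout 2 http://affinity-clusterip:80/; echo; done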
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:54:24.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7827" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:29.548 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":37,"skipped":360,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:54:24.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-e521ca13-222b-4682-bc44-5d350f68abbb STEP: Creating a pod to test consume configMaps May 24 23:54:25.087: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d27c0a39-1783-4139-a3a0-f77bd36a685d" in namespace "projected-7780" to be "Succeeded or Failed" May 24 23:54:25.105: INFO: Pod "pod-projected-configmaps-d27c0a39-1783-4139-a3a0-f77bd36a685d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.844068ms May 24 23:54:27.109: INFO: Pod "pod-projected-configmaps-d27c0a39-1783-4139-a3a0-f77bd36a685d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022050986s May 24 23:54:29.114: INFO: Pod "pod-projected-configmaps-d27c0a39-1783-4139-a3a0-f77bd36a685d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026438437s STEP: Saw pod success May 24 23:54:29.114: INFO: Pod "pod-projected-configmaps-d27c0a39-1783-4139-a3a0-f77bd36a685d" satisfied condition "Succeeded or Failed" May 24 23:54:29.117: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-d27c0a39-1783-4139-a3a0-f77bd36a685d container projected-configmap-volume-test: STEP: delete the pod May 24 23:54:29.175: INFO: Waiting for pod pod-projected-configmaps-d27c0a39-1783-4139-a3a0-f77bd36a685d to disappear May 24 23:54:29.192: INFO: Pod pod-projected-configmaps-d27c0a39-1783-4139-a3a0-f77bd36a685d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:54:29.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7780" for this suite. 
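The projected-configMap test above mounts the same ConfigMap into one pod at two different paths and reads both copies. A sketch with illustrative names (the generated names in the run differ):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: demo-config                # illustrative name
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-two-volumes           # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
      volumeMounts:
      - name: cm-one
        mountPath: /etc/cm-one
      - name: cm-two
        mountPath: /etc/cm-two
    volumes:
    - name: cm-one
      projected:
        sources:
        - configMap:
            name: demo-config
    - name: cm-two
      projected:
        sources:
        - configMap:
            name: demo-config        # same ConfigMap, second volume
  EOF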
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":38,"skipped":416,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:54:29.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:54:33.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-201" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":39,"skipped":421,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:54:33.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7727 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-7727 May 24 23:54:33.850: INFO: Found 0 stateful pods, waiting for 1 May 24 23:54:43.855: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 24 23:54:43.882: INFO: Deleting all statefulset in ns statefulset-7727 May 24 23:54:43.900: INFO: Scaling statefulset ss to 0 May 24 23:55:04.037: INFO: Waiting for statefulset status.replicas updated to 0 May 24 23:55:04.041: INFO: Deleting 
statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:55:04.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7727" for this suite. • [SLOW TEST:30.376 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":40,"skipped":427,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:55:04.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 23:57:04.237: INFO: Deleting pod "var-expansion-99f82b96-1abc-4c3e-99d1-33f91a9d5311" in namespace "var-expansion-1986" May 24 23:57:04.241: INFO: Wait up to 5m0s for pod "var-expansion-99f82b96-1abc-4c3e-99d1-33f91a9d5311" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:57:08.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1986" for this suite. 
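The roughly two-minute gap before the pod deletion above is the test waiting for the pod to fail: an expanded volume subPathExpr must resolve to a relative path, so an absolute value is rejected at mount time. A hedged sketch of the shape expected to fail (the env var name and value are illustrative, not taken from the run):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-bad-subpath           # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      env:
      - name: ABS_PATH
        value: /tmp                  # absolute value makes the expanded subpath invalid
      volumeMounts:
      - name: work
        mountPath: /volume_mount
        subPathExpr: $(ABS_PATH)     # expands to an absolute path -> pod fails to start
    volumes:
    - name: work
      emptyDir: {}
  EOF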
• [SLOW TEST:124.242 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":41,"skipped":433,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:57:08.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 24 23:57:08.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 24 23:57:08.660: INFO: stderr: "" May 24 23:57:08.660: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:57:08.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7704" for this suite. 
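The api-versions check above reduces to confirming that the core group's v1 appears in the served list, which can be reproduced by hand:

  kubectl api-versions | grep -x 'v1'
  # exit status 0 confirms the core v1 API is served,
  # matching the "v1" entry at the end of the stdout above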
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":42,"skipped":453,"failed":0} SSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:57:08.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 24 23:57:13.337: INFO: Successfully updated pod "pod-update-6f510748-87d5-4e97-95e9-c65a401b75d4" STEP: verifying the updated pod is in kubernetes May 24 23:57:13.407: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:57:13.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-616" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":43,"skipped":456,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:57:13.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-6f526e5f-1249-4066-8929-f2aaa394ff67 STEP: Creating a pod to test consume secrets May 24 23:57:13.499: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6ee2107b-d08c-4cb4-8317-b328b647a19a" in namespace "projected-7483" to be "Succeeded or Failed" May 24 23:57:13.582: INFO: Pod "pod-projected-secrets-6ee2107b-d08c-4cb4-8317-b328b647a19a": Phase="Pending", Reason="", readiness=false. Elapsed: 83.222543ms May 24 23:57:15.586: INFO: Pod "pod-projected-secrets-6ee2107b-d08c-4cb4-8317-b328b647a19a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087505258s May 24 23:57:17.590: INFO: Pod "pod-projected-secrets-6ee2107b-d08c-4cb4-8317-b328b647a19a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.090959433s STEP: Saw pod success May 24 23:57:17.590: INFO: Pod "pod-projected-secrets-6ee2107b-d08c-4cb4-8317-b328b647a19a" satisfied condition "Succeeded or Failed" May 24 23:57:17.592: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-6ee2107b-d08c-4cb4-8317-b328b647a19a container projected-secret-volume-test: STEP: delete the pod May 24 23:57:17.814: INFO: Waiting for pod pod-projected-secrets-6ee2107b-d08c-4cb4-8317-b328b647a19a to disappear May 24 23:57:17.868: INFO: Pod pod-projected-secrets-6ee2107b-d08c-4cb4-8317-b328b647a19a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:57:17.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7483" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":44,"skipped":496,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:57:17.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 24 23:57:18.506: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 24 23:57:20.535: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961438, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961438, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961438, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961438, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 23:57:23.552: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to 
convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 23:57:23.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:57:24.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2802" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.029 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":45,"skipped":501,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:57:24.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 24 23:57:24.952: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:57:32.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6418" for this suite. 
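The init-container test above creates a restartPolicy: Never pod whose init containers must each run to completion, in order, before the app container starts. A sketch with illustrative names and images:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-init                  # illustrative name
  spec:
    restartPolicy: Never
    initContainers:
    - name: init-1
      image: busybox
      command: ["true"]              # must exit 0 before the next container runs
    - name: init-2
      image: busybox
      command: ["true"]
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "echo app running"]
  EOF
  # Init container results are recorded separately in status:
  kubectl get pod demo-init -o jsonpath='{.status.initContainerStatuses[*].state}'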
• [SLOW TEST:7.563 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":46,"skipped":510,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:57:32.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 23:57:33.032: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 24 23:57:35.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961453, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961453, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961453, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961453, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 23:57:38.116: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:57:38.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9917" for this suite. STEP: Destroying namespace "webhook-9917-markers" for this suite. 
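Registration in the webhook test above goes through the AdmissionRegistration API. The rough shape of a MutatingWebhookConfiguration pointing at an in-cluster service looks like the sketch below; the service name and namespace match the run, but the webhook name, path, and CA handling are illustrative (the test generates its own serving cert):

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: demo-mutate-pods           # illustrative name
  webhooks:
  - name: mutate-pods.example.com    # illustrative webhook name
    admissionReviewVersions: ["v1", "v1beta1"]
    sideEffects: None
    failurePolicy: Fail
    clientConfig:
      service:
        name: e2e-test-webhook       # service name from the run
        namespace: webhook-9917      # namespace from the run
        path: /mutating-pods         # illustrative path
      # caBundle: <base64-encoded CA for the webhook's serving cert>
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  EOF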
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.145 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":47,"skipped":512,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:57:38.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 23:57:38.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 24 23:57:39.441: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T23:57:39Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-24T23:57:39Z]] name:name1 resourceVersion:7410324 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3a83142a-7f94-4b43-bcbf-4f7057385e42] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 24 23:57:49.448: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T23:57:49Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-24T23:57:49Z]] name:name2 resourceVersion:7410376 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:9dcc674f-fb82-473f-aaf6-74c57dff462c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 24 23:57:59.457: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T23:57:39Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-24T23:57:59Z]] name:name1 resourceVersion:7410406 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 
uid:3a83142a-7f94-4b43-bcbf-4f7057385e42] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 24 23:58:09.465: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T23:57:49Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-24T23:58:09Z]] name:name2 resourceVersion:7410436 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:9dcc674f-fb82-473f-aaf6-74c57dff462c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 24 23:58:19.473: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T23:57:39Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-24T23:57:59Z]] name:name1 resourceVersion:7410466 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3a83142a-7f94-4b43-bcbf-4f7057385e42] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 24 23:58:29.482: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T23:57:49Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-24T23:58:09Z]] name:name2 resourceVersion:7410495 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:9dcc674f-fb82-473f-aaf6-74c57dff462c] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 23:58:39.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-2874" for this suite. 
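The ADDED/MODIFIED/DELETED lines above come from a watch opened against the custom resource with the dynamic client; the group, version, and resource can be read off the selfLink (/apis/mygroup.example.com/v1beta1/noxus/...), which also shows the CRD is cluster-scoped. A minimal sketch of the watching side, assuming the same kubeconfig and a v0.18+ dynamic client:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := dynamic.NewForConfigOrDie(config)

        // Group/version/resource taken from the selfLink in the log:
        // /apis/mygroup.example.com/v1beta1/noxus/name1 (no namespace segment,
        // so the resource is cluster-scoped).
        gvr := schema.GroupVersionResource{
            Group:    "mygroup.example.com",
            Version:  "v1beta1",
            Resource: "noxus",
        }

        w, err := client.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        // Each event mirrors one "Got : ADDED/MODIFIED/DELETED" line above.
        for event := range w.ResultChan() {
            obj := event.Object.(*unstructured.Unstructured)
            fmt.Printf("%s %s (resourceVersion %s)\n",
                event.Type, obj.GetName(), obj.GetResourceVersion())
        }
    }

Because the CRD has no typed Go client, the events carry unstructured objects; the generation and managedFields churn visible in the log is exactly what the MODIFIED events surface after each update.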
• [SLOW TEST:61.387 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":48,"skipped":513,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 23:58:40.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6733 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 24 23:58:40.134: INFO: Found 0 stateful pods, waiting for 3 May 24 23:58:50.334: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 23:58:50.334: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 23:58:50.334: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 24 23:59:00.140: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 23:59:00.141: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 23:59:00.141: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 24 23:59:00.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6733 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 23:59:00.432: INFO: stderr: "I0524 23:59:00.312116 1394 log.go:172] (0xc000566dc0) (0xc000aac320) Create stream\nI0524 23:59:00.312175 1394 log.go:172] (0xc000566dc0) (0xc000aac320) Stream added, broadcasting: 1\nI0524 23:59:00.316597 1394 log.go:172] (0xc000566dc0) Reply frame received for 1\nI0524 23:59:00.316631 1394 log.go:172] (0xc000566dc0) (0xc00070e000) Create stream\nI0524 23:59:00.316641 1394 log.go:172] (0xc000566dc0) (0xc00070e000) Stream added, broadcasting: 3\nI0524 23:59:00.317529 1394 log.go:172] (0xc000566dc0) Reply frame received for 
3\nI0524 23:59:00.317550 1394 log.go:172] (0xc000566dc0) (0xc0004f8e60) Create stream\nI0524 23:59:00.317558 1394 log.go:172] (0xc000566dc0) (0xc0004f8e60) Stream added, broadcasting: 5\nI0524 23:59:00.318267 1394 log.go:172] (0xc000566dc0) Reply frame received for 5\nI0524 23:59:00.392559 1394 log.go:172] (0xc000566dc0) Data frame received for 5\nI0524 23:59:00.392596 1394 log.go:172] (0xc0004f8e60) (5) Data frame handling\nI0524 23:59:00.392622 1394 log.go:172] (0xc0004f8e60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 23:59:00.424639 1394 log.go:172] (0xc000566dc0) Data frame received for 5\nI0524 23:59:00.424675 1394 log.go:172] (0xc0004f8e60) (5) Data frame handling\nI0524 23:59:00.424696 1394 log.go:172] (0xc000566dc0) Data frame received for 3\nI0524 23:59:00.424705 1394 log.go:172] (0xc00070e000) (3) Data frame handling\nI0524 23:59:00.424721 1394 log.go:172] (0xc00070e000) (3) Data frame sent\nI0524 23:59:00.424739 1394 log.go:172] (0xc000566dc0) Data frame received for 3\nI0524 23:59:00.424750 1394 log.go:172] (0xc00070e000) (3) Data frame handling\nI0524 23:59:00.426732 1394 log.go:172] (0xc000566dc0) Data frame received for 1\nI0524 23:59:00.426760 1394 log.go:172] (0xc000aac320) (1) Data frame handling\nI0524 23:59:00.426778 1394 log.go:172] (0xc000aac320) (1) Data frame sent\nI0524 23:59:00.426801 1394 log.go:172] (0xc000566dc0) (0xc000aac320) Stream removed, broadcasting: 1\nI0524 23:59:00.426831 1394 log.go:172] (0xc000566dc0) Go away received\nI0524 23:59:00.427256 1394 log.go:172] (0xc000566dc0) (0xc000aac320) Stream removed, broadcasting: 1\nI0524 23:59:00.427290 1394 log.go:172] (0xc000566dc0) (0xc00070e000) Stream removed, broadcasting: 3\nI0524 23:59:00.427304 1394 log.go:172] (0xc000566dc0) (0xc0004f8e60) Stream removed, broadcasting: 5\n" May 24 23:59:00.433: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 23:59:00.433: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 24 23:59:10.462: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 24 23:59:20.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6733 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 23:59:20.804: INFO: stderr: "I0524 23:59:20.692331 1416 log.go:172] (0xc000970000) (0xc000532c80) Create stream\nI0524 23:59:20.692393 1416 log.go:172] (0xc000970000) (0xc000532c80) Stream added, broadcasting: 1\nI0524 23:59:20.694641 1416 log.go:172] (0xc000970000) Reply frame received for 1\nI0524 23:59:20.694694 1416 log.go:172] (0xc000970000) (0xc0004c32c0) Create stream\nI0524 23:59:20.694714 1416 log.go:172] (0xc000970000) (0xc0004c32c0) Stream added, broadcasting: 3\nI0524 23:59:20.695781 1416 log.go:172] (0xc000970000) Reply frame received for 3\nI0524 23:59:20.695826 1416 log.go:172] (0xc000970000) (0xc000151680) Create stream\nI0524 23:59:20.695869 1416 log.go:172] (0xc000970000) (0xc000151680) Stream added, broadcasting: 5\nI0524 23:59:20.696747 1416 log.go:172] (0xc000970000) Reply frame received for 5\nI0524 23:59:20.797567 1416 log.go:172] (0xc000970000) Data frame received for 5\nI0524 23:59:20.797598 1416 log.go:172] 
(0xc000970000) Data frame received for 3\nI0524 23:59:20.797623 1416 log.go:172] (0xc0004c32c0) (3) Data frame handling\nI0524 23:59:20.797637 1416 log.go:172] (0xc0004c32c0) (3) Data frame sent\nI0524 23:59:20.797647 1416 log.go:172] (0xc000970000) Data frame received for 3\nI0524 23:59:20.797656 1416 log.go:172] (0xc0004c32c0) (3) Data frame handling\nI0524 23:59:20.797689 1416 log.go:172] (0xc000151680) (5) Data frame handling\nI0524 23:59:20.797704 1416 log.go:172] (0xc000151680) (5) Data frame sent\nI0524 23:59:20.797716 1416 log.go:172] (0xc000970000) Data frame received for 5\nI0524 23:59:20.797737 1416 log.go:172] (0xc000151680) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 23:59:20.799164 1416 log.go:172] (0xc000970000) Data frame received for 1\nI0524 23:59:20.799177 1416 log.go:172] (0xc000532c80) (1) Data frame handling\nI0524 23:59:20.799184 1416 log.go:172] (0xc000532c80) (1) Data frame sent\nI0524 23:59:20.799197 1416 log.go:172] (0xc000970000) (0xc000532c80) Stream removed, broadcasting: 1\nI0524 23:59:20.799212 1416 log.go:172] (0xc000970000) Go away received\nI0524 23:59:20.799493 1416 log.go:172] (0xc000970000) (0xc000532c80) Stream removed, broadcasting: 1\nI0524 23:59:20.799510 1416 log.go:172] (0xc000970000) (0xc0004c32c0) Stream removed, broadcasting: 3\nI0524 23:59:20.799518 1416 log.go:172] (0xc000970000) (0xc000151680) Stream removed, broadcasting: 5\n" May 24 23:59:20.804: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 23:59:20.804: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 23:59:30.826: INFO: Waiting for StatefulSet statefulset-6733/ss2 to complete update May 24 23:59:30.826: INFO: Waiting for Pod statefulset-6733/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 23:59:30.826: INFO: Waiting for Pod statefulset-6733/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 23:59:30.826: INFO: Waiting for Pod statefulset-6733/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 23:59:40.835: INFO: Waiting for StatefulSet statefulset-6733/ss2 to complete update May 24 23:59:40.836: INFO: Waiting for Pod statefulset-6733/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 23:59:40.836: INFO: Waiting for Pod statefulset-6733/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 23:59:50.835: INFO: Waiting for StatefulSet statefulset-6733/ss2 to complete update May 24 23:59:50.835: INFO: Waiting for Pod statefulset-6733/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 25 00:00:00.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6733 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 00:00:01.093: INFO: stderr: "I0525 00:00:00.964554 1437 log.go:172] (0xc00003a420) (0xc000301720) Create stream\nI0525 00:00:00.964620 1437 log.go:172] (0xc00003a420) (0xc000301720) Stream added, broadcasting: 1\nI0525 00:00:00.967759 1437 log.go:172] (0xc00003a420) Reply frame received for 1\nI0525 00:00:00.967791 1437 log.go:172] (0xc00003a420) (0xc0005b6f00) Create stream\nI0525 00:00:00.967801 1437 log.go:172] (0xc00003a420) (0xc0005b6f00) Stream added, broadcasting: 3\nI0525 00:00:00.969420 1437 
log.go:172] (0xc00003a420) Reply frame received for 3\nI0525 00:00:00.969467 1437 log.go:172] (0xc00003a420) (0xc0001f4460) Create stream\nI0525 00:00:00.969484 1437 log.go:172] (0xc00003a420) (0xc0001f4460) Stream added, broadcasting: 5\nI0525 00:00:00.970466 1437 log.go:172] (0xc00003a420) Reply frame received for 5\nI0525 00:00:01.050411 1437 log.go:172] (0xc00003a420) Data frame received for 5\nI0525 00:00:01.050436 1437 log.go:172] (0xc0001f4460) (5) Data frame handling\nI0525 00:00:01.050452 1437 log.go:172] (0xc0001f4460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 00:00:01.085071 1437 log.go:172] (0xc00003a420) Data frame received for 3\nI0525 00:00:01.085101 1437 log.go:172] (0xc0005b6f00) (3) Data frame handling\nI0525 00:00:01.085233 1437 log.go:172] (0xc0005b6f00) (3) Data frame sent\nI0525 00:00:01.085830 1437 log.go:172] (0xc00003a420) Data frame received for 3\nI0525 00:00:01.085857 1437 log.go:172] (0xc0005b6f00) (3) Data frame handling\nI0525 00:00:01.085898 1437 log.go:172] (0xc00003a420) Data frame received for 5\nI0525 00:00:01.085950 1437 log.go:172] (0xc0001f4460) (5) Data frame handling\nI0525 00:00:01.087601 1437 log.go:172] (0xc00003a420) Data frame received for 1\nI0525 00:00:01.087641 1437 log.go:172] (0xc000301720) (1) Data frame handling\nI0525 00:00:01.087675 1437 log.go:172] (0xc000301720) (1) Data frame sent\nI0525 00:00:01.087699 1437 log.go:172] (0xc00003a420) (0xc000301720) Stream removed, broadcasting: 1\nI0525 00:00:01.087726 1437 log.go:172] (0xc00003a420) Go away received\nI0525 00:00:01.088138 1437 log.go:172] (0xc00003a420) (0xc000301720) Stream removed, broadcasting: 1\nI0525 00:00:01.088164 1437 log.go:172] (0xc00003a420) (0xc0005b6f00) Stream removed, broadcasting: 3\nI0525 00:00:01.088179 1437 log.go:172] (0xc00003a420) (0xc0001f4460) Stream removed, broadcasting: 5\n" May 25 00:00:01.094: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 00:00:01.094: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 00:00:11.140: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 25 00:00:21.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6733 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 00:00:21.434: INFO: stderr: "I0525 00:00:21.339316 1459 log.go:172] (0xc0009c02c0) (0xc000285720) Create stream\nI0525 00:00:21.339368 1459 log.go:172] (0xc0009c02c0) (0xc000285720) Stream added, broadcasting: 1\nI0525 00:00:21.341908 1459 log.go:172] (0xc0009c02c0) Reply frame received for 1\nI0525 00:00:21.341963 1459 log.go:172] (0xc0009c02c0) (0xc000674460) Create stream\nI0525 00:00:21.341979 1459 log.go:172] (0xc0009c02c0) (0xc000674460) Stream added, broadcasting: 3\nI0525 00:00:21.342951 1459 log.go:172] (0xc0009c02c0) Reply frame received for 3\nI0525 00:00:21.343005 1459 log.go:172] (0xc0009c02c0) (0xc00064a140) Create stream\nI0525 00:00:21.343014 1459 log.go:172] (0xc0009c02c0) (0xc00064a140) Stream added, broadcasting: 5\nI0525 00:00:21.344020 1459 log.go:172] (0xc0009c02c0) Reply frame received for 5\nI0525 00:00:21.427393 1459 log.go:172] (0xc0009c02c0) Data frame received for 5\nI0525 00:00:21.427428 1459 log.go:172] (0xc0009c02c0) Data frame received for 3\nI0525 00:00:21.427459 1459 log.go:172] (0xc000674460) (3) Data frame 
handling\nI0525 00:00:21.427479 1459 log.go:172] (0xc000674460) (3) Data frame sent\nI0525 00:00:21.427512 1459 log.go:172] (0xc0009c02c0) Data frame received for 3\nI0525 00:00:21.427526 1459 log.go:172] (0xc000674460) (3) Data frame handling\nI0525 00:00:21.427537 1459 log.go:172] (0xc00064a140) (5) Data frame handling\nI0525 00:00:21.427547 1459 log.go:172] (0xc00064a140) (5) Data frame sent\nI0525 00:00:21.427554 1459 log.go:172] (0xc0009c02c0) Data frame received for 5\nI0525 00:00:21.427561 1459 log.go:172] (0xc00064a140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 00:00:21.428842 1459 log.go:172] (0xc0009c02c0) Data frame received for 1\nI0525 00:00:21.428866 1459 log.go:172] (0xc000285720) (1) Data frame handling\nI0525 00:00:21.428879 1459 log.go:172] (0xc000285720) (1) Data frame sent\nI0525 00:00:21.428909 1459 log.go:172] (0xc0009c02c0) (0xc000285720) Stream removed, broadcasting: 1\nI0525 00:00:21.429051 1459 log.go:172] (0xc0009c02c0) Go away received\nI0525 00:00:21.429356 1459 log.go:172] (0xc0009c02c0) (0xc000285720) Stream removed, broadcasting: 1\nI0525 00:00:21.429375 1459 log.go:172] (0xc0009c02c0) (0xc000674460) Stream removed, broadcasting: 3\nI0525 00:00:21.429382 1459 log.go:172] (0xc0009c02c0) (0xc00064a140) Stream removed, broadcasting: 5\n" May 25 00:00:21.434: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 00:00:21.434: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 00:00:31.489: INFO: Waiting for StatefulSet statefulset-6733/ss2 to complete update May 25 00:00:31.489: INFO: Waiting for Pod statefulset-6733/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 25 00:00:31.489: INFO: Waiting for Pod statefulset-6733/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 25 00:00:31.489: INFO: Waiting for Pod statefulset-6733/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 25 00:00:41.498: INFO: Waiting for StatefulSet statefulset-6733/ss2 to complete update May 25 00:00:41.498: INFO: Waiting for Pod statefulset-6733/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 25 00:00:41.499: INFO: Waiting for Pod statefulset-6733/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 25 00:00:51.498: INFO: Waiting for StatefulSet statefulset-6733/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 25 00:01:01.498: INFO: Deleting all statefulset in ns statefulset-6733 May 25 00:01:01.502: INFO: Scaling statefulset ss2 to 0 May 25 00:01:31.523: INFO: Waiting for statefulset status.replicas updated to 0 May 25 00:01:31.526: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:01:31.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6733" for this suite. 
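The rolling update and the rollback above are both plain template updates: the suite swaps the container image (2.4.38-alpine to 2.4.39-alpine and back) and then polls the StatefulSet status until currentRevision catches up with updateRevision. A minimal client-go sketch of one update-and-wait cycle, assuming v0.18+ signatures; the conflict-retry handling a production client would add is omitted:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        ctx := context.TODO()
        ns, name := "statefulset-6733", "ss2" // names from the run above

        // Trigger a rolling update by changing the pod template image,
        // which is all the test does to create a new revision.
        ss, err := client.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.39-alpine"
        if _, err := client.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }

        // Wait until every replica runs the new revision; this is the condition
        // behind the "Waiting for StatefulSet ... to complete update" lines.
        err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            cur, err := client.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            replicas := int32(1)
            if cur.Spec.Replicas != nil {
                replicas = *cur.Spec.Replicas
            }
            done := cur.Status.UpdateRevision == cur.Status.CurrentRevision &&
                cur.Status.UpdatedReplicas == replicas
            return done, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("rolling update complete")
    }

Rolling back is the same call with the previous image; either way the controller replaces pods from the highest ordinal down, which matches the order in which ss2-2, then ss2-1, then ss2-0 drop out of the "Waiting for Pod ... to have revision" messages above.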
• [SLOW TEST:171.565 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":49,"skipped":521,"failed":0} [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:01:31.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9961 STEP: creating service affinity-nodeport-transition in namespace services-9961 STEP: creating replication controller affinity-nodeport-transition in namespace services-9961 I0525 00:01:31.855557 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-9961, replica count: 3 I0525 00:01:34.905947 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 00:01:37.906233 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 00:01:37.917: INFO: Creating new exec pod May 25 00:01:42.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9961 execpod-affinityztdpw -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 25 00:01:43.199: INFO: stderr: "I0525 00:01:43.111153 1482 log.go:172] (0xc000a151e0) (0xc0006b50e0) Create stream\nI0525 00:01:43.111220 1482 log.go:172] (0xc000a151e0) (0xc0006b50e0) Stream added, broadcasting: 1\nI0525 00:01:43.116024 1482 log.go:172] (0xc000a151e0) Reply frame received for 1\nI0525 00:01:43.116083 1482 log.go:172] (0xc000a151e0) (0xc000325c20) Create stream\nI0525 00:01:43.116100 1482 log.go:172] (0xc000a151e0) (0xc000325c20) Stream added, broadcasting: 3\nI0525 00:01:43.117062 1482 log.go:172] (0xc000a151e0) Reply frame received for 3\nI0525 00:01:43.117095 1482 log.go:172] (0xc000a151e0) (0xc0005f6be0) Create stream\nI0525 00:01:43.117105 1482 log.go:172] (0xc000a151e0) (0xc0005f6be0) Stream added, broadcasting: 5\nI0525 00:01:43.118187 1482 log.go:172] (0xc000a151e0) Reply frame received for 5\nI0525 00:01:43.190322 
1482 log.go:172] (0xc000a151e0) Data frame received for 5\nI0525 00:01:43.190360 1482 log.go:172] (0xc0005f6be0) (5) Data frame handling\nI0525 00:01:43.190388 1482 log.go:172] (0xc0005f6be0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0525 00:01:43.191004 1482 log.go:172] (0xc000a151e0) Data frame received for 5\nI0525 00:01:43.191031 1482 log.go:172] (0xc0005f6be0) (5) Data frame handling\nI0525 00:01:43.191058 1482 log.go:172] (0xc0005f6be0) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0525 00:01:43.191199 1482 log.go:172] (0xc000a151e0) Data frame received for 5\nI0525 00:01:43.191216 1482 log.go:172] (0xc0005f6be0) (5) Data frame handling\nI0525 00:01:43.191380 1482 log.go:172] (0xc000a151e0) Data frame received for 3\nI0525 00:01:43.191405 1482 log.go:172] (0xc000325c20) (3) Data frame handling\nI0525 00:01:43.192986 1482 log.go:172] (0xc000a151e0) Data frame received for 1\nI0525 00:01:43.192998 1482 log.go:172] (0xc0006b50e0) (1) Data frame handling\nI0525 00:01:43.193010 1482 log.go:172] (0xc0006b50e0) (1) Data frame sent\nI0525 00:01:43.193020 1482 log.go:172] (0xc000a151e0) (0xc0006b50e0) Stream removed, broadcasting: 1\nI0525 00:01:43.193399 1482 log.go:172] (0xc000a151e0) Go away received\nI0525 00:01:43.193584 1482 log.go:172] (0xc000a151e0) (0xc0006b50e0) Stream removed, broadcasting: 1\nI0525 00:01:43.193605 1482 log.go:172] (0xc000a151e0) (0xc000325c20) Stream removed, broadcasting: 3\nI0525 00:01:43.193615 1482 log.go:172] (0xc000a151e0) (0xc0005f6be0) Stream removed, broadcasting: 5\n" May 25 00:01:43.199: INFO: stdout: "" May 25 00:01:43.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9961 execpod-affinityztdpw -- /bin/sh -x -c nc -zv -t -w 2 10.102.233.80 80' May 25 00:01:43.400: INFO: stderr: "I0525 00:01:43.333934 1502 log.go:172] (0xc00003ae70) (0xc0006c8e60) Create stream\nI0525 00:01:43.333995 1502 log.go:172] (0xc00003ae70) (0xc0006c8e60) Stream added, broadcasting: 1\nI0525 00:01:43.337077 1502 log.go:172] (0xc00003ae70) Reply frame received for 1\nI0525 00:01:43.337307 1502 log.go:172] (0xc00003ae70) (0xc00043ec80) Create stream\nI0525 00:01:43.337339 1502 log.go:172] (0xc00003ae70) (0xc00043ec80) Stream added, broadcasting: 3\nI0525 00:01:43.338421 1502 log.go:172] (0xc00003ae70) Reply frame received for 3\nI0525 00:01:43.338441 1502 log.go:172] (0xc00003ae70) (0xc0000ddae0) Create stream\nI0525 00:01:43.338447 1502 log.go:172] (0xc00003ae70) (0xc0000ddae0) Stream added, broadcasting: 5\nI0525 00:01:43.339609 1502 log.go:172] (0xc00003ae70) Reply frame received for 5\nI0525 00:01:43.392951 1502 log.go:172] (0xc00003ae70) Data frame received for 5\nI0525 00:01:43.392998 1502 log.go:172] (0xc0000ddae0) (5) Data frame handling\nI0525 00:01:43.393017 1502 log.go:172] (0xc0000ddae0) (5) Data frame sent\nI0525 00:01:43.393036 1502 log.go:172] (0xc00003ae70) Data frame received for 5\nI0525 00:01:43.393048 1502 log.go:172] (0xc0000ddae0) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.233.80 80\nConnection to 10.102.233.80 80 port [tcp/http] succeeded!\nI0525 00:01:43.393074 1502 log.go:172] (0xc00003ae70) Data frame received for 3\nI0525 00:01:43.393098 1502 log.go:172] (0xc00043ec80) (3) Data frame handling\nI0525 00:01:43.394619 1502 log.go:172] (0xc00003ae70) Data frame received for 1\nI0525 00:01:43.394659 1502 log.go:172] (0xc0006c8e60) (1) Data frame handling\nI0525 00:01:43.394692 1502 
log.go:172] (0xc0006c8e60) (1) Data frame sent\nI0525 00:01:43.394721 1502 log.go:172] (0xc00003ae70) (0xc0006c8e60) Stream removed, broadcasting: 1\nI0525 00:01:43.394746 1502 log.go:172] (0xc00003ae70) Go away received\nI0525 00:01:43.395076 1502 log.go:172] (0xc00003ae70) (0xc0006c8e60) Stream removed, broadcasting: 1\nI0525 00:01:43.395103 1502 log.go:172] (0xc00003ae70) (0xc00043ec80) Stream removed, broadcasting: 3\nI0525 00:01:43.395121 1502 log.go:172] (0xc00003ae70) (0xc0000ddae0) Stream removed, broadcasting: 5\n" May 25 00:01:43.400: INFO: stdout: "" May 25 00:01:43.400: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9961 execpod-affinityztdpw -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32564' May 25 00:01:43.638: INFO: stderr: "I0525 00:01:43.555468 1523 log.go:172] (0xc0008f66e0) (0xc000305ea0) Create stream\nI0525 00:01:43.555526 1523 log.go:172] (0xc0008f66e0) (0xc000305ea0) Stream added, broadcasting: 1\nI0525 00:01:43.558738 1523 log.go:172] (0xc0008f66e0) Reply frame received for 1\nI0525 00:01:43.558787 1523 log.go:172] (0xc0008f66e0) (0xc000137f40) Create stream\nI0525 00:01:43.558802 1523 log.go:172] (0xc0008f66e0) (0xc000137f40) Stream added, broadcasting: 3\nI0525 00:01:43.560114 1523 log.go:172] (0xc0008f66e0) Reply frame received for 3\nI0525 00:01:43.560167 1523 log.go:172] (0xc0008f66e0) (0xc00068ce60) Create stream\nI0525 00:01:43.560182 1523 log.go:172] (0xc0008f66e0) (0xc00068ce60) Stream added, broadcasting: 5\nI0525 00:01:43.561350 1523 log.go:172] (0xc0008f66e0) Reply frame received for 5\nI0525 00:01:43.630764 1523 log.go:172] (0xc0008f66e0) Data frame received for 3\nI0525 00:01:43.630799 1523 log.go:172] (0xc000137f40) (3) Data frame handling\nI0525 00:01:43.630834 1523 log.go:172] (0xc0008f66e0) Data frame received for 5\nI0525 00:01:43.630874 1523 log.go:172] (0xc00068ce60) (5) Data frame handling\nI0525 00:01:43.630903 1523 log.go:172] (0xc00068ce60) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 32564\nConnection to 172.17.0.13 32564 port [tcp/32564] succeeded!\nI0525 00:01:43.630924 1523 log.go:172] (0xc0008f66e0) Data frame received for 5\nI0525 00:01:43.630964 1523 log.go:172] (0xc00068ce60) (5) Data frame handling\nI0525 00:01:43.632937 1523 log.go:172] (0xc0008f66e0) Data frame received for 1\nI0525 00:01:43.632967 1523 log.go:172] (0xc000305ea0) (1) Data frame handling\nI0525 00:01:43.632984 1523 log.go:172] (0xc000305ea0) (1) Data frame sent\nI0525 00:01:43.633005 1523 log.go:172] (0xc0008f66e0) (0xc000305ea0) Stream removed, broadcasting: 1\nI0525 00:01:43.633041 1523 log.go:172] (0xc0008f66e0) Go away received\nI0525 00:01:43.633571 1523 log.go:172] (0xc0008f66e0) (0xc000305ea0) Stream removed, broadcasting: 1\nI0525 00:01:43.633595 1523 log.go:172] (0xc0008f66e0) (0xc000137f40) Stream removed, broadcasting: 3\nI0525 00:01:43.633608 1523 log.go:172] (0xc0008f66e0) (0xc00068ce60) Stream removed, broadcasting: 5\n" May 25 00:01:43.638: INFO: stdout: "" May 25 00:01:43.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9961 execpod-affinityztdpw -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32564' May 25 00:01:43.851: INFO: stderr: "I0525 00:01:43.778397 1543 log.go:172] (0xc0009b4dc0) (0xc000a7a140) Create stream\nI0525 00:01:43.778509 1543 log.go:172] (0xc0009b4dc0) (0xc000a7a140) Stream added, broadcasting: 1\nI0525 00:01:43.787662 1543 log.go:172] (0xc0009b4dc0) Reply 
frame received for 1\nI0525 00:01:43.787725 1543 log.go:172] (0xc0009b4dc0) (0xc0003c4460) Create stream\nI0525 00:01:43.787744 1543 log.go:172] (0xc0009b4dc0) (0xc0003c4460) Stream added, broadcasting: 3\nI0525 00:01:43.789061 1543 log.go:172] (0xc0009b4dc0) Reply frame received for 3\nI0525 00:01:43.789240 1543 log.go:172] (0xc0009b4dc0) (0xc0006aef00) Create stream\nI0525 00:01:43.789326 1543 log.go:172] (0xc0009b4dc0) (0xc0006aef00) Stream added, broadcasting: 5\nI0525 00:01:43.790476 1543 log.go:172] (0xc0009b4dc0) Reply frame received for 5\nI0525 00:01:43.845756 1543 log.go:172] (0xc0009b4dc0) Data frame received for 3\nI0525 00:01:43.845789 1543 log.go:172] (0xc0003c4460) (3) Data frame handling\nI0525 00:01:43.845815 1543 log.go:172] (0xc0009b4dc0) Data frame received for 5\nI0525 00:01:43.845832 1543 log.go:172] (0xc0006aef00) (5) Data frame handling\nI0525 00:01:43.845843 1543 log.go:172] (0xc0006aef00) (5) Data frame sent\nI0525 00:01:43.845851 1543 log.go:172] (0xc0009b4dc0) Data frame received for 5\nI0525 00:01:43.845855 1543 log.go:172] (0xc0006aef00) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32564\nConnection to 172.17.0.12 32564 port [tcp/32564] succeeded!\nI0525 00:01:43.847159 1543 log.go:172] (0xc0009b4dc0) Data frame received for 1\nI0525 00:01:43.847179 1543 log.go:172] (0xc000a7a140) (1) Data frame handling\nI0525 00:01:43.847192 1543 log.go:172] (0xc000a7a140) (1) Data frame sent\nI0525 00:01:43.847207 1543 log.go:172] (0xc0009b4dc0) (0xc000a7a140) Stream removed, broadcasting: 1\nI0525 00:01:43.847229 1543 log.go:172] (0xc0009b4dc0) Go away received\nI0525 00:01:43.847700 1543 log.go:172] (0xc0009b4dc0) (0xc000a7a140) Stream removed, broadcasting: 1\nI0525 00:01:43.847721 1543 log.go:172] (0xc0009b4dc0) (0xc0003c4460) Stream removed, broadcasting: 3\nI0525 00:01:43.847735 1543 log.go:172] (0xc0009b4dc0) (0xc0006aef00) Stream removed, broadcasting: 5\n" May 25 00:01:43.852: INFO: stdout: "" May 25 00:01:43.860: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9961 execpod-affinityztdpw -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32564/ ; done' May 25 00:01:44.174: INFO: stderr: "I0525 00:01:44.020408 1563 log.go:172] (0xc00041adc0) (0xc000521680) Create stream\nI0525 00:01:44.020483 1563 log.go:172] (0xc00041adc0) (0xc000521680) Stream added, broadcasting: 1\nI0525 00:01:44.023622 1563 log.go:172] (0xc00041adc0) Reply frame received for 1\nI0525 00:01:44.023664 1563 log.go:172] (0xc00041adc0) (0xc0004945a0) Create stream\nI0525 00:01:44.023692 1563 log.go:172] (0xc00041adc0) (0xc0004945a0) Stream added, broadcasting: 3\nI0525 00:01:44.024568 1563 log.go:172] (0xc00041adc0) Reply frame received for 3\nI0525 00:01:44.024591 1563 log.go:172] (0xc00041adc0) (0xc000495540) Create stream\nI0525 00:01:44.024600 1563 log.go:172] (0xc00041adc0) (0xc000495540) Stream added, broadcasting: 5\nI0525 00:01:44.025797 1563 log.go:172] (0xc00041adc0) Reply frame received for 5\nI0525 00:01:44.079037 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.079082 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.079097 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.079124 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.079135 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.079148 1563 log.go:172] (0xc000495540) (5) Data frame 
sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.083779 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.083820 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.083858 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.084140 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.084164 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.084191 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.084210 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.084220 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.084243 1563 log.go:172] (0xc000495540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.087979 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.088004 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.088023 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.088421 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.088461 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.088475 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.088492 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.088502 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.088513 1563 log.go:172] (0xc000495540) (5) Data frame sent\nI0525 00:01:44.088527 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.088539 1563 log.go:172] (0xc000495540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.088602 1563 log.go:172] (0xc000495540) (5) Data frame sent\nI0525 00:01:44.096694 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.096716 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.096728 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.097562 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.097589 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.097602 1563 log.go:172] (0xc000495540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.097619 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.097630 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.097641 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.100593 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.100612 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.100643 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.100835 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.100851 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.100863 1563 log.go:172] (0xc000495540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.100970 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.100999 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.101018 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.108224 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.108243 1563 log.go:172] (0xc0004945a0) (3) 
Data frame handling\nI0525 00:01:44.108263 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.108670 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.108695 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.108709 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.108733 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.108759 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.108785 1563 log.go:172] (0xc000495540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.112868 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.112891 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.112924 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.113472 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.113490 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.113500 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.113521 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.113541 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.113552 1563 log.go:172] (0xc000495540) (5) Data frame sent\nI0525 00:01:44.113563 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.113580 1563 log.go:172] (0xc000495540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.113608 1563 log.go:172] (0xc000495540) (5) Data frame sent\nI0525 00:01:44.118065 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.118096 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.118118 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.118952 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.118981 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.118996 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.119023 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.119039 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.119055 1563 log.go:172] (0xc000495540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.124368 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.124398 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.124417 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.124778 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.124798 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.124811 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.124855 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.124888 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.124918 1563 log.go:172] (0xc000495540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.132721 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.132763 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.132801 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.133017 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.133069 1563 log.go:172] 
(0xc000495540) (5) Data frame handling\nI0525 00:01:44.133095 1563 log.go:172] (0xc000495540) (5) Data frame sent\nI0525 00:01:44.133374 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.133411 1563 log.go:172] (0xc000495540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.133456 1563 log.go:172] (0xc000495540) (5) Data frame sent\nI0525 00:01:44.133481 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.133507 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.133531 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.137480 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.137510 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.137530 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.137552 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.137585 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.137603 1563 log.go:172] (0xc000495540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.137619 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.137628 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.137643 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.140936 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.140967 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.141001 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.141761 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.141789 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.141809 1563 log.go:172] (0xc000495540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.141841 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.141865 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.141885 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.146239 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.146257 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.146271 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.146996 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.147020 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.147037 1563 log.go:172] (0xc000495540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.147071 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.147085 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.147102 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.152274 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.152297 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.152315 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.152899 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.152942 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.152965 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.152995 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.153017 1563 
log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.153046 1563 log.go:172] (0xc000495540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.156381 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.156417 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.156460 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.156758 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.156778 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.156800 1563 log.go:172] (0xc000495540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.156922 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.156941 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.156959 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.161945 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.161973 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.161992 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.162246 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.162275 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.162299 1563 log.go:172] (0xc000495540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.162380 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.162414 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.162438 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.166263 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.166290 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.166309 1563 log.go:172] (0xc0004945a0) (3) Data frame sent\nI0525 00:01:44.166870 1563 log.go:172] (0xc00041adc0) Data frame received for 3\nI0525 00:01:44.166910 1563 log.go:172] (0xc0004945a0) (3) Data frame handling\nI0525 00:01:44.167096 1563 log.go:172] (0xc00041adc0) Data frame received for 5\nI0525 00:01:44.167117 1563 log.go:172] (0xc000495540) (5) Data frame handling\nI0525 00:01:44.169309 1563 log.go:172] (0xc00041adc0) Data frame received for 1\nI0525 00:01:44.169360 1563 log.go:172] (0xc000521680) (1) Data frame handling\nI0525 00:01:44.169379 1563 log.go:172] (0xc000521680) (1) Data frame sent\nI0525 00:01:44.169423 1563 log.go:172] (0xc00041adc0) (0xc000521680) Stream removed, broadcasting: 1\nI0525 00:01:44.169470 1563 log.go:172] (0xc00041adc0) Go away received\nI0525 00:01:44.169731 1563 log.go:172] (0xc00041adc0) (0xc000521680) Stream removed, broadcasting: 1\nI0525 00:01:44.169752 1563 log.go:172] (0xc00041adc0) (0xc0004945a0) Stream removed, broadcasting: 3\nI0525 00:01:44.169760 1563 log.go:172] (0xc00041adc0) (0xc000495540) Stream removed, broadcasting: 5\n" May 25 00:01:44.174: INFO: stdout: 
"\naffinity-nodeport-transition-54c5z\naffinity-nodeport-transition-p2lqp\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-54c5z\naffinity-nodeport-transition-54c5z\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-p2lqp\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-p2lqp\naffinity-nodeport-transition-p2lqp\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-54c5z\naffinity-nodeport-transition-p2lqp\naffinity-nodeport-transition-54c5z" May 25 00:01:44.174: INFO: Received response from host: May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-54c5z May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-p2lqp May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-54c5z May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-54c5z May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-p2lqp May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-p2lqp May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-p2lqp May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-54c5z May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-p2lqp May 25 00:01:44.174: INFO: Received response from host: affinity-nodeport-transition-54c5z May 25 00:01:44.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9961 execpod-affinityztdpw -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32564/ ; done' May 25 00:01:44.518: INFO: stderr: "I0525 00:01:44.357928 1584 log.go:172] (0xc000b86000) (0xc0008d8320) Create stream\nI0525 00:01:44.357985 1584 log.go:172] (0xc000b86000) (0xc0008d8320) Stream added, broadcasting: 1\nI0525 00:01:44.361019 1584 log.go:172] (0xc000b86000) Reply frame received for 1\nI0525 00:01:44.361077 1584 log.go:172] (0xc000b86000) (0xc0008ca3c0) Create stream\nI0525 00:01:44.361098 1584 log.go:172] (0xc000b86000) (0xc0008ca3c0) Stream added, broadcasting: 3\nI0525 00:01:44.362300 1584 log.go:172] (0xc000b86000) Reply frame received for 3\nI0525 00:01:44.362353 1584 log.go:172] (0xc000b86000) (0xc0008c4d20) Create stream\nI0525 00:01:44.362373 1584 log.go:172] (0xc000b86000) (0xc0008c4d20) Stream added, broadcasting: 5\nI0525 00:01:44.363230 1584 log.go:172] (0xc000b86000) Reply frame received for 5\nI0525 00:01:44.428595 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.428628 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.428643 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.428666 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 
00:01:44.428674 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.428688 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.435385 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.435408 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.435424 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.436241 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.436269 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.436280 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.436294 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.436302 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.436311 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.440193 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.440219 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.440239 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.440899 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.440939 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.440966 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.440995 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.441005 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.441039 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.445416 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.445448 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.445477 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.446211 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.446250 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.446272 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.446295 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.446317 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.446338 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.449476 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.449510 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.449539 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.450411 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.450431 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.450462 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.450479 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.450495 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.450511 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.454138 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.454166 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.454187 1584 log.go:172] (0xc0008ca3c0) (3) Data frame 
sent\nI0525 00:01:44.454517 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.454531 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.454540 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.454607 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.454636 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.454654 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.461015 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.461039 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.461059 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.462047 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.462097 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.462125 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.462168 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.462194 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.462232 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.466737 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.466771 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.466817 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.467252 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.467305 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.467329 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.467351 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.467365 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.467395 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.471608 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.471629 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.471646 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.472042 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.472059 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.472081 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\nI0525 00:01:44.472092 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.472098 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.472103 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.476016 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.476028 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.476034 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.476613 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.476647 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.476660 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.476681 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.476700 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.476736 1584 log.go:172] (0xc0008c4d20) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.480072 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.480108 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.480144 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.480270 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.480293 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.480303 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.480331 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.480352 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.480375 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\nI0525 00:01:44.480389 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.480402 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.480429 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\nI0525 00:01:44.484821 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.484844 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.484862 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.485558 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.485580 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.485595 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.485611 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.485620 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.485630 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.489533 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.489554 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.489568 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.489865 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.489891 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.489916 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.489942 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.489964 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.489984 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.494997 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.495020 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.495040 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.495576 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.495600 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.495620 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.495641 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.495662 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.495682 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.500351 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.500378 1584 log.go:172] (0xc0008ca3c0) (3) 
Data frame handling\nI0525 00:01:44.500397 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.500671 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.500690 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.500709 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.500908 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.500931 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.500960 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.505792 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.505819 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.505839 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.506460 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.506477 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.506488 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.506513 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.506543 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.506565 1584 log.go:172] (0xc0008c4d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32564/\nI0525 00:01:44.509687 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.509745 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.509771 1584 log.go:172] (0xc0008ca3c0) (3) Data frame sent\nI0525 00:01:44.510134 1584 log.go:172] (0xc000b86000) Data frame received for 3\nI0525 00:01:44.510148 1584 log.go:172] (0xc0008ca3c0) (3) Data frame handling\nI0525 00:01:44.510379 1584 log.go:172] (0xc000b86000) Data frame received for 5\nI0525 00:01:44.510394 1584 log.go:172] (0xc0008c4d20) (5) Data frame handling\nI0525 00:01:44.512243 1584 log.go:172] (0xc000b86000) Data frame received for 1\nI0525 00:01:44.512261 1584 log.go:172] (0xc0008d8320) (1) Data frame handling\nI0525 00:01:44.512273 1584 log.go:172] (0xc0008d8320) (1) Data frame sent\nI0525 00:01:44.512400 1584 log.go:172] (0xc000b86000) (0xc0008d8320) Stream removed, broadcasting: 1\nI0525 00:01:44.512547 1584 log.go:172] (0xc000b86000) Go away received\nI0525 00:01:44.512680 1584 log.go:172] (0xc000b86000) (0xc0008d8320) Stream removed, broadcasting: 1\nI0525 00:01:44.512691 1584 log.go:172] (0xc000b86000) (0xc0008ca3c0) Stream removed, broadcasting: 3\nI0525 00:01:44.512696 1584 log.go:172] (0xc000b86000) (0xc0008c4d20) Stream removed, broadcasting: 5\n" May 25 00:01:44.518: INFO: stdout: "\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl\naffinity-nodeport-transition-2zvxl" May 25 00:01:44.518: INFO: Received response from host: May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: 
INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Received response from host: affinity-nodeport-transition-2zvxl May 25 00:01:44.518: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-9961, will wait for the garbage collector to delete the pods May 25 00:01:44.647: INFO: Deleting ReplicationController affinity-nodeport-transition took: 25.214973ms May 25 00:01:45.048: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.261458ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:01:55.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9961" for this suite. 
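The sixteen identical affinity-nodeport-transition-2zvxl responses above are the point of the check: with sessionAffinity pinned to ClientIP, repeated requests from the same client pod must keep landing on the same backend. A minimal sketch of replaying that probe by hand; the namespace, node IP, and NodePort are taken from this run, while the exec pod name is a placeholder (the test deletes its exec pod afterwards):

# Pin the service to ClientIP affinity (the test flips this setting back and forth).
kubectl patch svc affinity-nodeport-transition -n services-9961 \
  -p '{"spec":{"sessionAffinity":"ClientIP"}}'
# Same loop the test ran inside its exec pod: 16 curls against the NodePort;
# every response should name the same backend pod.
kubectl exec -n services-9961 execpod -- /bin/sh -x -c \
  'for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32564/; done'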
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:23.886 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":50,"skipped":521,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:01:55.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0525 00:02:36.309981 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 25 00:02:36.310: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:02:36.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6087" for this suite. 
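What the 30-second wait above establishes is the orphaning path of the garbage collector: the delete options asked for the pods to be orphaned, so the replication controller disappears while its simpletest.rc-* pods keep running (they are still visible on both nodes in the scheduling test that follows). A rough kubectl equivalent, assuming an RC named simpletest.rc; kubectl of roughly this vintage expresses orphaning as --cascade=false, and newer releases spell it --cascade=orphan:

# Delete only the RC object; the garbage collector must leave its pods alone.
kubectl delete rc simpletest.rc --cascade=false    # on kubectl >= 1.20: --cascade=orphan
# The pods survive the delete, now without an owning controller.
kubectl get pods | grep simpletest.rc-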
• [SLOW TEST:40.863 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":51,"skipped":549,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:02:36.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-36e24013-f237-436c-b737-f22bd99e1cf4 STEP: Creating a pod to test consume secrets May 25 00:02:36.418: INFO: Waiting up to 5m0s for pod "pod-secrets-5595749e-16ed-4e35-a43f-4f2f965663c8" in namespace "secrets-1525" to be "Succeeded or Failed" May 25 00:02:36.421: INFO: Pod "pod-secrets-5595749e-16ed-4e35-a43f-4f2f965663c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.970117ms May 25 00:02:38.426: INFO: Pod "pod-secrets-5595749e-16ed-4e35-a43f-4f2f965663c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007249089s May 25 00:02:40.430: INFO: Pod "pod-secrets-5595749e-16ed-4e35-a43f-4f2f965663c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011672259s STEP: Saw pod success May 25 00:02:40.430: INFO: Pod "pod-secrets-5595749e-16ed-4e35-a43f-4f2f965663c8" satisfied condition "Succeeded or Failed" May 25 00:02:40.433: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-5595749e-16ed-4e35-a43f-4f2f965663c8 container secret-volume-test: STEP: delete the pod May 25 00:02:40.552: INFO: Waiting for pod pod-secrets-5595749e-16ed-4e35-a43f-4f2f965663c8 to disappear May 25 00:02:40.568: INFO: Pod pod-secrets-5595749e-16ed-4e35-a43f-4f2f965663c8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:02:40.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1525" for this suite. 
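The Succeeded phase above is all the secret test asserts: the kubelet mounted the secret volume with the requested defaultMode, and the container inspected the resulting file and exited zero. A minimal reproduction under assumed names, with busybox standing in for the test's mount-check image:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["ls", "-l", "/etc/secret-volume"]   # should show -r-------- for data-1
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400    # octal; without it the kubelet defaults to 0644
EOF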
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":52,"skipped":551,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:02:40.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 25 00:02:40.702: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 00:02:40.729: INFO: Waiting for terminating namespaces to be deleted... May 25 00:02:40.731: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 25 00:02:40.736: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 25 00:02:40.736: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 25 00:02:40.736: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 25 00:02:40.736: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 25 00:02:40.736: INFO: simpletest.rc-gpv57 from gc-6087 started at 2020-05-25 00:01:55 +0000 UTC (1 container status recorded) May 25 00:02:40.736: INFO: Container nginx ready: true, restart count 0 May 25 00:02:40.736: INFO: simpletest.rc-ldzxz from gc-6087 started at 2020-05-25 00:01:55 +0000 UTC (1 container status recorded) May 25 00:02:40.736: INFO: Container nginx ready: true, restart count 0 May 25 00:02:40.736: INFO: simpletest.rc-lscpm from gc-6087 started at 2020-05-25 00:01:55 +0000 UTC (1 container status recorded) May 25 00:02:40.736: INFO: Container nginx ready: true, restart count 0 May 25 00:02:40.736: INFO: simpletest.rc-v94ww from gc-6087 started at 2020-05-25 00:01:55 +0000 UTC (1 container status recorded) May 25 00:02:40.736: INFO: Container nginx ready: true, restart count 0 May 25 00:02:40.736: INFO: simpletest.rc-xqqzc from gc-6087 started at 2020-05-25 00:01:55 +0000 UTC (1 container status recorded) May 25 00:02:40.736: INFO: Container nginx ready: true, restart count 0 May 25 00:02:40.736: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 25 00:02:40.736: INFO: Container kindnet-cni ready: true, restart count 0 May 25 00:02:40.736: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 25 00:02:40.736: INFO: Container kube-proxy ready: true, restart count 0 May 25 00:02:40.737: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 25 00:02:40.742: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 25 00:02:40.742: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 25 00:02:40.742: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 25 00:02:40.742: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 25 00:02:40.742: INFO: simpletest.rc-2cvqn from gc-6087 started at 2020-05-25 00:01:55 +0000 UTC (1 container status recorded) May 25 00:02:40.742: INFO: Container nginx ready: true, restart count 0 May 25 00:02:40.742: INFO: simpletest.rc-42l5j from gc-6087 started at 2020-05-25 00:01:55 +0000 UTC (1 container status recorded) May 25 00:02:40.742: INFO: Container nginx ready: true, restart count 0 May 25 00:02:40.742: INFO: simpletest.rc-4f6tx from gc-6087 started at 2020-05-25 00:01:55 +0000 UTC (1 container status recorded) May 25 00:02:40.742: INFO: Container nginx ready: true, restart count 0 May 25 00:02:40.742: INFO: simpletest.rc-hw5c9 from gc-6087 started at 2020-05-25 00:01:55 +0000 UTC (1 container status recorded) May 25 00:02:40.742: INFO: Container nginx ready: true, restart count 0 May 25 00:02:40.742: INFO: simpletest.rc-z9s76 from gc-6087 started at 2020-05-25 00:01:55 +0000 UTC (1 container status recorded) May 25 00:02:40.742: INFO: Container nginx ready: true, restart count 0 May 25 00:02:40.742: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 25 00:02:40.742: INFO: Container kindnet-cni ready: true, restart count 0 May 25 00:02:40.742: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 25 00:02:40.742: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-98251474-652e-4ab8-abfb-35a9a29f0f90 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-98251474-652e-4ab8-abfb-35a9a29f0f90 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-98251474-652e-4ab8-abfb-35a9a29f0f90 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:02:51.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-556" for this suite.
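The label steps above are the predicate itself: the relaunched pod carries a nodeSelector and can only be scheduled while some node has the matching label, which the test removes again at the end. A hand-run version of the same flow; the label key, pod name, and image are illustrative (the test generated a random kubernetes.io/e2e-* key with value 42):

kubectl label nodes latest-worker example.com/e2e-demo=42
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF
kubectl wait --for=condition=Ready pod/with-labels   # schedules only while the label is present
kubectl label nodes latest-worker example.com/e2e-demo-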
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.229 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":53,"skipped":555,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:02:51.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:02:51.872: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:02:52.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5068" for this suite. 
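The test above talks to the /status subresource of the CustomResourceDefinition object itself, checking that it can be fetched, updated, and patched independently of the spec. Two quick ways to look at the same endpoint with kubectl, using a stand-in CRD name:

# Read the CRD's status subresource straight from the API surface.
kubectl get --raw /apis/apiextensions.k8s.io/v1/customresourcedefinitions/noxus.example.com/status
# The status carries acceptedNames and conditions, also reachable via jsonpath:
kubectl get crd noxus.example.com -o jsonpath='{.status.acceptedNames.plural}'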
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":54,"skipped":560,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:02:52.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8289 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8289 I0525 00:02:52.777349 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8289, replica count: 2 I0525 00:02:55.827825 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 00:02:58.828034 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 00:02:58.828: INFO: Creating new exec pod May 25 00:03:03.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8289 execpodsllj6 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 25 00:03:04.125: INFO: stderr: "I0525 00:03:04.019076 1604 log.go:172] (0xc0009d20b0) (0xc00061dc20) Create stream\nI0525 00:03:04.019143 1604 log.go:172] (0xc0009d20b0) (0xc00061dc20) Stream added, broadcasting: 1\nI0525 00:03:04.021552 1604 log.go:172] (0xc0009d20b0) Reply frame received for 1\nI0525 00:03:04.021602 1604 log.go:172] (0xc0009d20b0) (0xc0005e8d20) Create stream\nI0525 00:03:04.021617 1604 log.go:172] (0xc0009d20b0) (0xc0005e8d20) Stream added, broadcasting: 3\nI0525 00:03:04.022574 1604 log.go:172] (0xc0009d20b0) Reply frame received for 3\nI0525 00:03:04.022609 1604 log.go:172] (0xc0009d20b0) (0xc0005e05a0) Create stream\nI0525 00:03:04.022620 1604 log.go:172] (0xc0009d20b0) (0xc0005e05a0) Stream added, broadcasting: 5\nI0525 00:03:04.023344 1604 log.go:172] (0xc0009d20b0) Reply frame received for 5\nI0525 00:03:04.115097 1604 log.go:172] (0xc0009d20b0) Data frame received for 5\nI0525 00:03:04.115142 1604 log.go:172] (0xc0005e05a0) (5) Data frame handling\nI0525 00:03:04.115183 1604 log.go:172] (0xc0005e05a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0525 00:03:04.116113 1604 log.go:172] (0xc0009d20b0) Data frame received for 5\nI0525 00:03:04.116144 1604 log.go:172] (0xc0005e05a0) (5) Data frame handling\nI0525 00:03:04.116168 1604 log.go:172] (0xc0005e05a0) (5) Data frame 
sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0525 00:03:04.116604 1604 log.go:172] (0xc0009d20b0) Data frame received for 3\nI0525 00:03:04.116662 1604 log.go:172] (0xc0005e8d20) (3) Data frame handling\nI0525 00:03:04.116702 1604 log.go:172] (0xc0009d20b0) Data frame received for 5\nI0525 00:03:04.116726 1604 log.go:172] (0xc0005e05a0) (5) Data frame handling\nI0525 00:03:04.119178 1604 log.go:172] (0xc0009d20b0) Data frame received for 1\nI0525 00:03:04.119211 1604 log.go:172] (0xc00061dc20) (1) Data frame handling\nI0525 00:03:04.119231 1604 log.go:172] (0xc00061dc20) (1) Data frame sent\nI0525 00:03:04.119255 1604 log.go:172] (0xc0009d20b0) (0xc00061dc20) Stream removed, broadcasting: 1\nI0525 00:03:04.119291 1604 log.go:172] (0xc0009d20b0) Go away received\nI0525 00:03:04.119674 1604 log.go:172] (0xc0009d20b0) (0xc00061dc20) Stream removed, broadcasting: 1\nI0525 00:03:04.119693 1604 log.go:172] (0xc0009d20b0) (0xc0005e8d20) Stream removed, broadcasting: 3\nI0525 00:03:04.119708 1604 log.go:172] (0xc0009d20b0) (0xc0005e05a0) Stream removed, broadcasting: 5\n" May 25 00:03:04.126: INFO: stdout: "" May 25 00:03:04.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8289 execpodsllj6 -- /bin/sh -x -c nc -zv -t -w 2 10.105.3.235 80' May 25 00:03:04.342: INFO: stderr: "I0525 00:03:04.271872 1626 log.go:172] (0xc0009bda20) (0xc0006b4f00) Create stream\nI0525 00:03:04.271924 1626 log.go:172] (0xc0009bda20) (0xc0006b4f00) Stream added, broadcasting: 1\nI0525 00:03:04.277975 1626 log.go:172] (0xc0009bda20) Reply frame received for 1\nI0525 00:03:04.278017 1626 log.go:172] (0xc0009bda20) (0xc0006d8e60) Create stream\nI0525 00:03:04.278027 1626 log.go:172] (0xc0009bda20) (0xc0006d8e60) Stream added, broadcasting: 3\nI0525 00:03:04.279374 1626 log.go:172] (0xc0009bda20) Reply frame received for 3\nI0525 00:03:04.279405 1626 log.go:172] (0xc0009bda20) (0xc0003966e0) Create stream\nI0525 00:03:04.279426 1626 log.go:172] (0xc0009bda20) (0xc0003966e0) Stream added, broadcasting: 5\nI0525 00:03:04.281459 1626 log.go:172] (0xc0009bda20) Reply frame received for 5\nI0525 00:03:04.335909 1626 log.go:172] (0xc0009bda20) Data frame received for 3\nI0525 00:03:04.335970 1626 log.go:172] (0xc0006d8e60) (3) Data frame handling\nI0525 00:03:04.336006 1626 log.go:172] (0xc0009bda20) Data frame received for 5\nI0525 00:03:04.336025 1626 log.go:172] (0xc0003966e0) (5) Data frame handling\nI0525 00:03:04.336049 1626 log.go:172] (0xc0003966e0) (5) Data frame sent\nI0525 00:03:04.336081 1626 log.go:172] (0xc0009bda20) Data frame received for 5\nI0525 00:03:04.336093 1626 log.go:172] (0xc0003966e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.3.235 80\nConnection to 10.105.3.235 80 port [tcp/http] succeeded!\nI0525 00:03:04.338103 1626 log.go:172] (0xc0009bda20) Data frame received for 1\nI0525 00:03:04.338138 1626 log.go:172] (0xc0006b4f00) (1) Data frame handling\nI0525 00:03:04.338155 1626 log.go:172] (0xc0006b4f00) (1) Data frame sent\nI0525 00:03:04.338173 1626 log.go:172] (0xc0009bda20) (0xc0006b4f00) Stream removed, broadcasting: 1\nI0525 00:03:04.338196 1626 log.go:172] (0xc0009bda20) Go away received\nI0525 00:03:04.338679 1626 log.go:172] (0xc0009bda20) (0xc0006b4f00) Stream removed, broadcasting: 1\nI0525 00:03:04.338710 1626 log.go:172] (0xc0009bda20) (0xc0006d8e60) Stream removed, broadcasting: 3\nI0525 00:03:04.338723 1626 log.go:172] (0xc0009bda20) (0xc0003966e0) Stream removed, 
broadcasting: 5\n" May 25 00:03:04.342: INFO: stdout: "" May 25 00:03:04.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8289 execpodsllj6 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31712' May 25 00:03:04.545: INFO: stderr: "I0525 00:03:04.465828 1642 log.go:172] (0xc000b5f130) (0xc0006f1e00) Create stream\nI0525 00:03:04.465876 1642 log.go:172] (0xc000b5f130) (0xc0006f1e00) Stream added, broadcasting: 1\nI0525 00:03:04.468985 1642 log.go:172] (0xc000b5f130) Reply frame received for 1\nI0525 00:03:04.469386 1642 log.go:172] (0xc000b5f130) (0xc0006a8d20) Create stream\nI0525 00:03:04.469506 1642 log.go:172] (0xc000b5f130) (0xc0006a8d20) Stream added, broadcasting: 3\nI0525 00:03:04.471770 1642 log.go:172] (0xc000b5f130) Reply frame received for 3\nI0525 00:03:04.471817 1642 log.go:172] (0xc000b5f130) (0xc00026b9a0) Create stream\nI0525 00:03:04.471846 1642 log.go:172] (0xc000b5f130) (0xc00026b9a0) Stream added, broadcasting: 5\nI0525 00:03:04.473299 1642 log.go:172] (0xc000b5f130) Reply frame received for 5\nI0525 00:03:04.536498 1642 log.go:172] (0xc000b5f130) Data frame received for 3\nI0525 00:03:04.536527 1642 log.go:172] (0xc0006a8d20) (3) Data frame handling\nI0525 00:03:04.536562 1642 log.go:172] (0xc000b5f130) Data frame received for 5\nI0525 00:03:04.536598 1642 log.go:172] (0xc00026b9a0) (5) Data frame handling\nI0525 00:03:04.536618 1642 log.go:172] (0xc00026b9a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31712\nConnection to 172.17.0.13 31712 port [tcp/31712] succeeded!\nI0525 00:03:04.536746 1642 log.go:172] (0xc000b5f130) Data frame received for 5\nI0525 00:03:04.536767 1642 log.go:172] (0xc00026b9a0) (5) Data frame handling\nI0525 00:03:04.540989 1642 log.go:172] (0xc000b5f130) Data frame received for 1\nI0525 00:03:04.541014 1642 log.go:172] (0xc0006f1e00) (1) Data frame handling\nI0525 00:03:04.541032 1642 log.go:172] (0xc0006f1e00) (1) Data frame sent\nI0525 00:03:04.541156 1642 log.go:172] (0xc000b5f130) (0xc0006f1e00) Stream removed, broadcasting: 1\nI0525 00:03:04.541178 1642 log.go:172] (0xc000b5f130) Go away received\nI0525 00:03:04.541567 1642 log.go:172] (0xc000b5f130) (0xc0006f1e00) Stream removed, broadcasting: 1\nI0525 00:03:04.541586 1642 log.go:172] (0xc000b5f130) (0xc0006a8d20) Stream removed, broadcasting: 3\nI0525 00:03:04.541593 1642 log.go:172] (0xc000b5f130) (0xc00026b9a0) Stream removed, broadcasting: 5\n" May 25 00:03:04.545: INFO: stdout: "" May 25 00:03:04.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8289 execpodsllj6 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31712' May 25 00:03:04.742: INFO: stderr: "I0525 00:03:04.670461 1663 log.go:172] (0xc00003a420) (0xc00039ed20) Create stream\nI0525 00:03:04.670513 1663 log.go:172] (0xc00003a420) (0xc00039ed20) Stream added, broadcasting: 1\nI0525 00:03:04.673348 1663 log.go:172] (0xc00003a420) Reply frame received for 1\nI0525 00:03:04.673390 1663 log.go:172] (0xc00003a420) (0xc00038e460) Create stream\nI0525 00:03:04.673403 1663 log.go:172] (0xc00003a420) (0xc00038e460) Stream added, broadcasting: 3\nI0525 00:03:04.674421 1663 log.go:172] (0xc00003a420) Reply frame received for 3\nI0525 00:03:04.674446 1663 log.go:172] (0xc00003a420) (0xc0000f1ae0) Create stream\nI0525 00:03:04.674461 1663 log.go:172] (0xc00003a420) (0xc0000f1ae0) Stream added, broadcasting: 5\nI0525 00:03:04.675454 1663 log.go:172] (0xc00003a420) Reply 
frame received for 5\nI0525 00:03:04.735355 1663 log.go:172] (0xc00003a420) Data frame received for 3\nI0525 00:03:04.735392 1663 log.go:172] (0xc00038e460) (3) Data frame handling\nI0525 00:03:04.735420 1663 log.go:172] (0xc00003a420) Data frame received for 5\nI0525 00:03:04.735430 1663 log.go:172] (0xc0000f1ae0) (5) Data frame handling\nI0525 00:03:04.735438 1663 log.go:172] (0xc0000f1ae0) (5) Data frame sent\nI0525 00:03:04.735445 1663 log.go:172] (0xc00003a420) Data frame received for 5\nI0525 00:03:04.735451 1663 log.go:172] (0xc0000f1ae0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31712\nConnection to 172.17.0.12 31712 port [tcp/31712] succeeded!\nI0525 00:03:04.736661 1663 log.go:172] (0xc00003a420) Data frame received for 1\nI0525 00:03:04.736676 1663 log.go:172] (0xc00039ed20) (1) Data frame handling\nI0525 00:03:04.736689 1663 log.go:172] (0xc00039ed20) (1) Data frame sent\nI0525 00:03:04.736761 1663 log.go:172] (0xc00003a420) (0xc00039ed20) Stream removed, broadcasting: 1\nI0525 00:03:04.736938 1663 log.go:172] (0xc00003a420) Go away received\nI0525 00:03:04.737348 1663 log.go:172] (0xc00003a420) (0xc00039ed20) Stream removed, broadcasting: 1\nI0525 00:03:04.737376 1663 log.go:172] (0xc00003a420) (0xc00038e460) Stream removed, broadcasting: 3\nI0525 00:03:04.737391 1663 log.go:172] (0xc00003a420) (0xc0000f1ae0) Stream removed, broadcasting: 5\n" May 25 00:03:04.743: INFO: stdout: "" May 25 00:03:04.743: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:03:04.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8289" for this suite. 
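The four nc probes above cover every path to the converted service: its DNS name (now resolving to a ClusterIP instead of a CNAME), the ClusterIP itself, and the new NodePort on each of the two nodes. They are copied from this run and can be replayed while the exec pod exists; the addresses and port 31712 are specific to this cluster:

kubectl exec -n services-8289 execpodsllj6 -- /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'
kubectl exec -n services-8289 execpodsllj6 -- /bin/sh -x -c 'nc -zv -t -w 2 10.105.3.235 80'
kubectl exec -n services-8289 execpodsllj6 -- /bin/sh -x -c 'nc -zv -t -w 2 172.17.0.13 31712'
kubectl exec -n services-8289 execpodsllj6 -- /bin/sh -x -c 'nc -zv -t -w 2 172.17.0.12 31712'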
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.374 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":55,"skipped":564,"failed":0} SSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:03:04.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod May 25 00:03:09.057: INFO: Pod pod-hostip-3abb9f89-c918-4ef5-90f3-4eeb1df880f1 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:03:09.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2322" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":56,"skipped":568,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:03:09.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:03:09.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2527" for this suite. 
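The QOS-class check above hinges on one rule: when every container's resource requests equal its limits for both cpu and memory, the apiserver records status.qosClass as Guaranteed. A minimal sketch with an illustrative pod name and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2
    resources:
      requests: {cpu: 100m, memory: 100Mi}
      limits:   {cpu: 100m, memory: 100Mi}
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints: Guaranteed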
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":57,"skipped":604,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:03:09.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:03:09.353: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 25 00:03:12.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5346 create -f -' May 25 00:03:17.480: INFO: stderr: "" May 25 00:03:17.480: INFO: stdout: "e2e-test-crd-publish-openapi-9687-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 25 00:03:17.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5346 delete e2e-test-crd-publish-openapi-9687-crds test-cr' May 25 00:03:17.620: INFO: stderr: "" May 25 00:03:17.620: INFO: stdout: "e2e-test-crd-publish-openapi-9687-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 25 00:03:17.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5346 apply -f -' May 25 00:03:17.880: INFO: stderr: "" May 25 00:03:17.880: INFO: stdout: "e2e-test-crd-publish-openapi-9687-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 25 00:03:17.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5346 delete e2e-test-crd-publish-openapi-9687-crds test-cr' May 25 00:03:17.978: INFO: stderr: "" May 25 00:03:17.979: INFO: stdout: "e2e-test-crd-publish-openapi-9687-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 25 00:03:17.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9687-crds' May 25 00:03:18.219: INFO: stderr: "" May 25 00:03:18.219: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9687-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:03:21.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5346" for 
this suite. • [SLOW TEST:11.954 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":58,"skipped":607,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:03:21.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 25 00:03:29.442: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 00:03:29.468: INFO: Pod pod-with-poststart-http-hook still exists May 25 00:03:31.468: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 00:03:31.490: INFO: Pod pod-with-poststart-http-hook still exists May 25 00:03:33.468: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 00:03:33.474: INFO: Pod pod-with-poststart-http-hook still exists May 25 00:03:35.468: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 00:03:35.473: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:03:35.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5884" for this suite. 
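The hook test above runs two pods: a handler pod serving HTTP, and the pod under test, whose postStart httpGet must reach that handler before the container counts as started; the tail of the log is just the poll loop waiting for the deleted pod to disappear. A sketch of the hook wiring, with the handler address, port, and image as placeholders:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.0.10        # IP of the handler pod; placeholder
          port: 8080
          path: /echo?msg=poststart
EOF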
• [SLOW TEST:14.292 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":59,"skipped":608,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:03:35.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 25 00:03:35.682: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:35.706: INFO: Number of nodes with available pods: 0 May 25 00:03:35.706: INFO: Node latest-worker is running more than one daemon pod May 25 00:03:36.711: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:36.715: INFO: Number of nodes with available pods: 0 May 25 00:03:36.715: INFO: Node latest-worker is running more than one daemon pod May 25 00:03:37.711: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:37.715: INFO: Number of nodes with available pods: 0 May 25 00:03:37.715: INFO: Node latest-worker is running more than one daemon pod May 25 00:03:38.712: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:38.716: INFO: Number of nodes with available pods: 0 May 25 00:03:38.716: INFO: Node latest-worker is running more than one daemon pod May 25 00:03:39.711: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:39.715: INFO: Number of nodes with available pods: 1 May 25 00:03:39.715: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:40.710: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:40.713: INFO: Number of nodes with available pods: 2 May 25 00:03:40.713: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 25 00:03:40.790: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:40.793: INFO: Number of nodes with available pods: 1 May 25 00:03:40.793: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:41.983: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:41.987: INFO: Number of nodes with available pods: 1 May 25 00:03:41.987: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:42.819: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:42.822: INFO: Number of nodes with available pods: 1 May 25 00:03:42.822: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:43.799: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:43.802: INFO: Number of nodes with available pods: 1 May 25 00:03:43.802: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:44.808: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:44.812: INFO: Number of nodes with available pods: 1 May 25 00:03:44.812: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:45.799: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:45.803: INFO: Number of nodes with available pods: 1 May 25 00:03:45.803: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:46.798: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:46.801: INFO: Number of nodes with available pods: 1 May 25 00:03:46.801: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:47.799: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:47.803: INFO: Number of nodes with available pods: 1 May 25 00:03:47.803: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:48.799: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:48.811: INFO: Number of nodes with available pods: 1 May 25 00:03:48.811: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:49.799: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:49.802: INFO: Number of nodes with available pods: 1 May 25 00:03:49.802: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:50.799: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:50.803: INFO: Number of nodes with available pods: 1 May 25 00:03:50.803: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:51.803: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:51.807: INFO: Number of nodes with available pods: 1 May 25 00:03:51.807: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:52.799: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:52.802: INFO: Number of nodes with available pods: 1 May 25 00:03:52.802: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:53.799: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:53.803: INFO: Number of nodes with available pods: 1 May 25 00:03:53.803: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:54.799: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:54.802: INFO: Number of nodes with available pods: 1 May 25 00:03:54.802: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:55.798: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:55.802: INFO: Number of nodes with available pods: 1 May 25 00:03:55.802: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:56.799: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:56.803: INFO: Number of nodes with available pods: 1 May 25 00:03:56.803: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:57.801: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:57.804: INFO: Number of nodes with available pods: 1 May 25 00:03:57.804: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:03:58.799: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:03:58.803: INFO: Number of nodes with available pods: 2 May 25 00:03:58.803: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7409, 
will wait for the garbage collector to delete the pods May 25 00:03:58.867: INFO: Deleting DaemonSet.extensions daemon-set took: 6.950263ms May 25 00:03:59.267: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.266484ms May 25 00:04:05.292: INFO: Number of nodes with available pods: 0 May 25 00:04:05.292: INFO: Number of running nodes: 0, number of available pods: 0 May 25 00:04:05.300: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7409/daemonsets","resourceVersion":"7412526"},"items":null} May 25 00:04:05.331: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7409/pods","resourceVersion":"7412527"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:04:05.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7409" for this suite. • [SLOW TEST:29.876 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":60,"skipped":625,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:04:05.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-ad38b5ea-601c-455f-b2de-37155209e65a STEP: Creating a pod to test consume secrets May 25 00:04:05.430: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-41b50eca-e80a-483e-af16-258c8786feae" in namespace "projected-9009" to be "Succeeded or Failed" May 25 00:04:05.449: INFO: Pod "pod-projected-secrets-41b50eca-e80a-483e-af16-258c8786feae": Phase="Pending", Reason="", readiness=false. Elapsed: 18.920296ms May 25 00:04:07.512: INFO: Pod "pod-projected-secrets-41b50eca-e80a-483e-af16-258c8786feae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082437083s May 25 00:04:09.517: INFO: Pod "pod-projected-secrets-41b50eca-e80a-483e-af16-258c8786feae": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.086982214s STEP: Saw pod success May 25 00:04:09.517: INFO: Pod "pod-projected-secrets-41b50eca-e80a-483e-af16-258c8786feae" satisfied condition "Succeeded or Failed" May 25 00:04:09.520: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-41b50eca-e80a-483e-af16-258c8786feae container projected-secret-volume-test: STEP: delete the pod May 25 00:04:09.796: INFO: Waiting for pod pod-projected-secrets-41b50eca-e80a-483e-af16-258c8786feae to disappear May 25 00:04:09.799: INFO: Pod pod-projected-secrets-41b50eca-e80a-483e-af16-258c8786feae no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:04:09.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9009" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":61,"skipped":643,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:04:09.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:04:14.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2867" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":62,"skipped":648,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:04:14.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:04:18.495: INFO: Waiting up to 5m0s for pod "client-envvars-f6b0cf4c-8799-466f-89b8-85db435f855e" in namespace "pods-1423" to be "Succeeded or Failed" May 25 00:04:18.516: INFO: Pod "client-envvars-f6b0cf4c-8799-466f-89b8-85db435f855e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.971305ms May 25 00:04:20.521: INFO: Pod "client-envvars-f6b0cf4c-8799-466f-89b8-85db435f855e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025921077s May 25 00:04:22.526: INFO: Pod "client-envvars-f6b0cf4c-8799-466f-89b8-85db435f855e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030993955s STEP: Saw pod success May 25 00:04:22.526: INFO: Pod "client-envvars-f6b0cf4c-8799-466f-89b8-85db435f855e" satisfied condition "Succeeded or Failed" May 25 00:04:22.530: INFO: Trying to get logs from node latest-worker2 pod client-envvars-f6b0cf4c-8799-466f-89b8-85db435f855e container env3cont: STEP: delete the pod May 25 00:04:22.679: INFO: Waiting for pod client-envvars-f6b0cf4c-8799-466f-89b8-85db435f855e to disappear May 25 00:04:22.716: INFO: Pod client-envvars-f6b0cf4c-8799-466f-89b8-85db435f855e no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:04:22.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1423" for this suite. • [SLOW TEST:8.657 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":63,"skipped":684,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:04:22.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
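For reference, the "daemon-set" fixture these specs exercise is built programmatically by the Go framework; written out as a manifest, an equivalent object would look roughly like the sketch below. The name and namespace come from this run's log, while the label key, container name, and image are illustrative assumptions rather than the suite's actual values (the image is one that appears elsewhere in this run).

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                # fixture name, as logged
  namespace: daemonsets-9972      # namespace from this test's teardown lines
spec:
  selector:
    matchLabels:
      app: daemon-set             # hypothetical label key
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                 # illustrative container name
        image: docker.io/library/httpd:2.4.38-alpine   # illustrative; seen elsewhere in this run

Because the pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, its pods cannot land on latest-control-plane, so the checker skips that node, which is why the polling below keeps logging "skip checking this node" and the spec is satisfied once the two worker nodes each report an available pod.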
May 25 00:04:22.879: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:04:22.912: INFO: Number of nodes with available pods: 0 May 25 00:04:22.912: INFO: Node latest-worker is running more than one daemon pod May 25 00:04:23.917: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:04:23.920: INFO: Number of nodes with available pods: 0 May 25 00:04:23.920: INFO: Node latest-worker is running more than one daemon pod May 25 00:04:24.997: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:04:25.000: INFO: Number of nodes with available pods: 0 May 25 00:04:25.000: INFO: Node latest-worker is running more than one daemon pod May 25 00:04:25.917: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:04:25.920: INFO: Number of nodes with available pods: 0 May 25 00:04:25.920: INFO: Node latest-worker is running more than one daemon pod May 25 00:04:26.918: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:04:26.922: INFO: Number of nodes with available pods: 0 May 25 00:04:26.922: INFO: Node latest-worker is running more than one daemon pod May 25 00:04:27.916: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:04:27.920: INFO: Number of nodes with available pods: 2 May 25 00:04:27.920: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
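The once-per-second polling on either side of this step is reading the DaemonSet's status counters. While the deliberately failed pod is being replaced (the lines that follow), the status block has roughly this shape; the field names below are the real apps/v1 DaemonSetStatus fields, while the values are inferred from the counts in the log:

status:
  desiredNumberScheduled: 2    # the two schedulable worker nodes
  currentNumberScheduled: 2
  numberReady: 1               # matches "Number of nodes with available pods: 1"
  numberAvailable: 1
  numberUnavailable: 1         # the pod whose phase was forced to Failed

The spec passes once numberAvailable climbs back to 2, i.e. once the controller has noticed the Failed pod and created a replacement.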
May 25 00:04:27.983: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:04:28.049: INFO: Number of nodes with available pods: 1 May 25 00:04:28.049: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:04:29.054: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:04:29.058: INFO: Number of nodes with available pods: 1 May 25 00:04:29.058: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:04:30.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:04:30.078: INFO: Number of nodes with available pods: 1 May 25 00:04:30.078: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:04:31.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:04:31.090: INFO: Number of nodes with available pods: 1 May 25 00:04:31.090: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:04:32.064: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:04:32.067: INFO: Number of nodes with available pods: 2 May 25 00:04:32.067: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9972, will wait for the garbage collector to delete the pods May 25 00:04:32.132: INFO: Deleting DaemonSet.extensions daemon-set took: 6.587588ms May 25 00:04:32.533: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.354598ms May 25 00:04:35.436: INFO: Number of nodes with available pods: 0 May 25 00:04:35.436: INFO: Number of running nodes: 0, number of available pods: 0 May 25 00:04:35.438: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9972/daemonsets","resourceVersion":"7412800"},"items":null} May 25 00:04:35.440: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9972/pods","resourceVersion":"7412800"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:04:35.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9972" for this suite. 
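A note on the teardown just logged: the DaemonSet is deleted in a way that does not orphan its pods, and the test then waits for the garbage collector to remove them ("will wait for the garbage collector to delete the pods"). As an API payload, a cascading delete looks like the sketch below; the suite builds the equivalent options in Go, and the choice of Foreground here is an assumption, since Background also collects dependents, just asynchronously:

apiVersion: meta.k8s.io/v1
kind: DeleteOptions
propagationPolicy: Foreground   # block the delete until dependent pods are gone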
• [SLOW TEST:12.723 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":64,"skipped":718,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:04:35.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 25 00:04:35.652: INFO: Waiting up to 5m0s for pod "pod-084bdd85-ff23-40e4-9f36-92656aac99f7" in namespace "emptydir-9880" to be "Succeeded or Failed" May 25 00:04:35.687: INFO: Pod "pod-084bdd85-ff23-40e4-9f36-92656aac99f7": Phase="Pending", Reason="", readiness=false. Elapsed: 35.076637ms May 25 00:04:37.692: INFO: Pod "pod-084bdd85-ff23-40e4-9f36-92656aac99f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039553776s May 25 00:04:39.696: INFO: Pod "pod-084bdd85-ff23-40e4-9f36-92656aac99f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043682421s STEP: Saw pod success May 25 00:04:39.696: INFO: Pod "pod-084bdd85-ff23-40e4-9f36-92656aac99f7" satisfied condition "Succeeded or Failed" May 25 00:04:39.699: INFO: Trying to get logs from node latest-worker2 pod pod-084bdd85-ff23-40e4-9f36-92656aac99f7 container test-container: STEP: delete the pod May 25 00:04:39.798: INFO: Waiting for pod pod-084bdd85-ff23-40e4-9f36-92656aac99f7 to disappear May 25 00:04:39.813: INFO: Pod pod-084bdd85-ff23-40e4-9f36-92656aac99f7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:04:39.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9880" for this suite. 
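The "(non-root,0666,tmpfs)" case just completed boils down to a pod of the following shape: a Memory-backed emptyDir (that is the tmpfs part) mounted into a container that runs as a non-root UID, creates a file with 0666 permissions, and exits so the pod can reach phase Succeeded as logged. The container name matches the log; the pod name, image, command, and UID are illustrative assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666       # hypothetical; the suite generates a UID-based name
spec:
  restartPolicy: Never          # lets the pod finish in phase Succeeded
  securityContext:
    runAsUser: 1001             # the "non-root" part; exact UID is illustrative
  containers:
  - name: test-container        # as logged
    image: busybox:1.29         # illustrative
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory            # tmpfs-backed instead of node disk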
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":65,"skipped":736,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:04:39.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-e4168248-7f3a-4191-b7b3-4679f8320fc7 STEP: Creating a pod to test consume secrets May 25 00:04:39.938: INFO: Waiting up to 5m0s for pod "pod-secrets-42e4b8ed-3b55-49e3-9048-b4325ac7c8ad" in namespace "secrets-3092" to be "Succeeded or Failed" May 25 00:04:39.945: INFO: Pod "pod-secrets-42e4b8ed-3b55-49e3-9048-b4325ac7c8ad": Phase="Pending", Reason="", readiness=false. Elapsed: 7.360935ms May 25 00:04:41.994: INFO: Pod "pod-secrets-42e4b8ed-3b55-49e3-9048-b4325ac7c8ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056492001s May 25 00:04:43.998: INFO: Pod "pod-secrets-42e4b8ed-3b55-49e3-9048-b4325ac7c8ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060652399s STEP: Saw pod success May 25 00:04:43.998: INFO: Pod "pod-secrets-42e4b8ed-3b55-49e3-9048-b4325ac7c8ad" satisfied condition "Succeeded or Failed" May 25 00:04:44.001: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-42e4b8ed-3b55-49e3-9048-b4325ac7c8ad container secret-volume-test: STEP: delete the pod May 25 00:04:44.060: INFO: Waiting for pod pod-secrets-42e4b8ed-3b55-49e3-9048-b4325ac7c8ad to disappear May 25 00:04:44.064: INFO: Pod pod-secrets-42e4b8ed-3b55-49e3-9048-b4325ac7c8ad no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:04:44.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3092" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":66,"skipped":744,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:04:44.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0525 00:04:54.215670 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 25 00:04:54.215: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:04:54.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6654" for this suite. 
• [SLOW TEST:10.149 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":67,"skipped":774,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:04:54.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:04:54.300: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 25 00:04:54.317: INFO: Pod name sample-pod: Found 0 pods out of 1 May 25 00:04:59.335: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 25 00:04:59.336: INFO: Creating deployment "test-rolling-update-deployment" May 25 00:04:59.341: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 25 00:04:59.414: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 25 00:05:01.479: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 25 00:05:01.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961899, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961899, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961899, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961899, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 00:05:03.486: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 25 00:05:03.496: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4764 /apis/apps/v1/namespaces/deployment-4764/deployments/test-rolling-update-deployment 01cc2b63-3af2-4741-aa91-0f7c925e95da 7413048 1 2020-05-25 00:04:59 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-25 00:04:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-25 00:05:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003896728 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-25 00:04:59 +0000 UTC,LastTransitionTime:2020-05-25 00:04:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-25 00:05:02 +0000 UTC,LastTransitionTime:2020-05-25 00:04:59 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 25 00:05:03.499: INFO: New ReplicaSet 
"test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-4764 /apis/apps/v1/namespaces/deployment-4764/replicasets/test-rolling-update-deployment-df7bb669b 58cf67ef-6cf5-41ee-8274-c7bcde13170e 7413037 1 2020-05-25 00:04:59 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 01cc2b63-3af2-4741-aa91-0f7c925e95da 0xc00384df40 0xc00384df41}] [] [{kube-controller-manager Update apps/v1 2020-05-25 00:05:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01cc2b63-3af2-4741-aa91-0f7c925e95da\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038bc048 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 25 00:05:03.499: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 25 00:05:03.499: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4764 /apis/apps/v1/namespaces/deployment-4764/replicasets/test-rolling-update-controller 965652f3-1d85-44dc-9e34-b0c0b8025ef7 7413047 2 2020-05-25 00:04:54 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 
01cc2b63-3af2-4741-aa91-0f7c925e95da 0xc00384dd47 0xc00384dd48}] [] [{e2e.test Update apps/v1 2020-05-25 00:04:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-25 00:05:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01cc2b63-3af2-4741-aa91-0f7c925e95da\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00384de88 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 00:05:03.502: INFO: Pod "test-rolling-update-deployment-df7bb669b-jtzs5" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-jtzs5 test-rolling-update-deployment-df7bb669b- deployment-4764 /api/v1/namespaces/deployment-4764/pods/test-rolling-update-deployment-df7bb669b-jtzs5 aa174e33-fb96-4384-a09d-e78bf18aa67f 7413036 0 2020-05-25 00:04:59 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b 58cf67ef-6cf5-41ee-8274-c7bcde13170e 0xc003896ef0 0xc003896ef1}] [] [{kube-controller-manager Update v1 2020-05-25 00:04:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"58cf67ef-6cf5-41ee-8274-c7bcde13170e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 00:05:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8gdjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8gdjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8gdjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:04:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-25 00:05:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:05:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:04:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.100,StartTime:2020-05-25 00:04:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 00:05:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://5d2430591572848a94c94f7fcb157678d5d251a15f3290673b3f69808f8d814d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:05:03.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4764" for this suite. • [SLOW TEST:9.286 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":68,"skipped":842,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:05:03.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 00:05:03.834: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc97afa3-b8a7-4e2f-8f90-3a2763b7f3cd" in namespace "downward-api-6021" to be "Succeeded or Failed" May 25 00:05:03.855: INFO: Pod "downwardapi-volume-fc97afa3-b8a7-4e2f-8f90-3a2763b7f3cd": Phase="Pending", Reason="", readiness=false. Elapsed: 21.11583ms May 25 00:05:05.859: INFO: Pod "downwardapi-volume-fc97afa3-b8a7-4e2f-8f90-3a2763b7f3cd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025249466s May 25 00:05:07.864: INFO: Pod "downwardapi-volume-fc97afa3-b8a7-4e2f-8f90-3a2763b7f3cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030085668s STEP: Saw pod success May 25 00:05:07.864: INFO: Pod "downwardapi-volume-fc97afa3-b8a7-4e2f-8f90-3a2763b7f3cd" satisfied condition "Succeeded or Failed" May 25 00:05:07.867: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-fc97afa3-b8a7-4e2f-8f90-3a2763b7f3cd container client-container: STEP: delete the pod May 25 00:05:07.906: INFO: Waiting for pod downwardapi-volume-fc97afa3-b8a7-4e2f-8f90-3a2763b7f3cd to disappear May 25 00:05:07.916: INFO: Pod downwardapi-volume-fc97afa3-b8a7-4e2f-8f90-3a2763b7f3cd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:05:07.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6021" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":69,"skipped":879,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:05:07.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 00:05:08.487: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 00:05:10.704: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961908, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961908, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961908, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961908, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 00:05:13.745: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or 
prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:05:13.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4743" for this suite. STEP: Destroying namespace "webhook-4743-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.177 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":70,"skipped":882,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:05:14.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 00:05:15.108: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 00:05:17.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961915, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961915, loc:(*time.Location)(0x7c342a0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961915, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961915, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 00:05:20.221: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:05:20.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2433" for this suite. STEP: Destroying namespace "webhook-2433-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.869 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":71,"skipped":883,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:05:20.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 00:05:21.929: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 00:05:23.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961921, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961921, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961922, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725961921, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 00:05:26.992: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:05:26.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5244-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:05:28.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7191" for this suite. STEP: Destroying namespace "webhook-7191-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.244 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":72,"skipped":899,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:05:28.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 25 00:05:28.333: INFO: Waiting up to 5m0s for pod "pod-e4959ac0-77fa-4b77-b38f-0f085ea14547" in namespace "emptydir-4856" to be "Succeeded or Failed" May 25 00:05:28.336: INFO: Pod 
"pod-e4959ac0-77fa-4b77-b38f-0f085ea14547": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253443ms May 25 00:05:30.340: INFO: Pod "pod-e4959ac0-77fa-4b77-b38f-0f085ea14547": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006801123s May 25 00:05:32.344: INFO: Pod "pod-e4959ac0-77fa-4b77-b38f-0f085ea14547": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01066528s STEP: Saw pod success May 25 00:05:32.344: INFO: Pod "pod-e4959ac0-77fa-4b77-b38f-0f085ea14547" satisfied condition "Succeeded or Failed" May 25 00:05:32.346: INFO: Trying to get logs from node latest-worker pod pod-e4959ac0-77fa-4b77-b38f-0f085ea14547 container test-container: STEP: delete the pod May 25 00:05:32.541: INFO: Waiting for pod pod-e4959ac0-77fa-4b77-b38f-0f085ea14547 to disappear May 25 00:05:32.609: INFO: Pod pod-e4959ac0-77fa-4b77-b38f-0f085ea14547 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:05:32.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4856" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":73,"skipped":918,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:05:32.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-f4f503e8-0e8a-44d0-bd87-3c7351b909a2 STEP: updating the pod May 25 00:05:41.296: INFO: Successfully updated pod "var-expansion-f4f503e8-0e8a-44d0-bd87-3c7351b909a2" STEP: waiting for pod and container restart STEP: Failing liveness probe May 25 00:05:41.314: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-5388 PodName:var-expansion-f4f503e8-0e8a-44d0-bd87-3c7351b909a2 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:05:41.314: INFO: >>> kubeConfig: /root/.kube/config I0525 00:05:41.350419 7 log.go:172] (0xc002df69a0) (0xc0019094a0) Create stream I0525 00:05:41.350462 7 log.go:172] (0xc002df69a0) (0xc0019094a0) Stream added, broadcasting: 1 I0525 00:05:41.352499 7 log.go:172] (0xc002df69a0) Reply frame received for 1 I0525 00:05:41.352547 7 log.go:172] (0xc002df69a0) (0xc000ccad20) Create stream I0525 00:05:41.352563 7 log.go:172] (0xc002df69a0) (0xc000ccad20) Stream added, broadcasting: 3 I0525 00:05:41.353988 7 log.go:172] (0xc002df69a0) Reply frame received for 3 I0525 00:05:41.354042 7 log.go:172] (0xc002df69a0) (0xc001c4a140) Create stream I0525 00:05:41.354059 7 log.go:172] (0xc002df69a0) (0xc001c4a140) Stream added, broadcasting: 5 I0525 
00:05:41.355010 7 log.go:172] (0xc002df69a0) Reply frame received for 5 I0525 00:05:41.452991 7 log.go:172] (0xc002df69a0) Data frame received for 5 I0525 00:05:41.453038 7 log.go:172] (0xc001c4a140) (5) Data frame handling I0525 00:05:41.453073 7 log.go:172] (0xc002df69a0) Data frame received for 3 I0525 00:05:41.453091 7 log.go:172] (0xc000ccad20) (3) Data frame handling I0525 00:05:41.454321 7 log.go:172] (0xc002df69a0) Data frame received for 1 I0525 00:05:41.454348 7 log.go:172] (0xc0019094a0) (1) Data frame handling I0525 00:05:41.454364 7 log.go:172] (0xc0019094a0) (1) Data frame sent I0525 00:05:41.454377 7 log.go:172] (0xc002df69a0) (0xc0019094a0) Stream removed, broadcasting: 1 I0525 00:05:41.454394 7 log.go:172] (0xc002df69a0) Go away received I0525 00:05:41.454692 7 log.go:172] (0xc002df69a0) (0xc0019094a0) Stream removed, broadcasting: 1 I0525 00:05:41.454721 7 log.go:172] (0xc002df69a0) (0xc000ccad20) Stream removed, broadcasting: 3 I0525 00:05:41.454742 7 log.go:172] (0xc002df69a0) (0xc001c4a140) Stream removed, broadcasting: 5 May 25 00:05:41.454: INFO: Pod exec output: / STEP: Waiting for container to restart May 25 00:05:41.458: INFO: Container dapi-container, restarts: 0 May 25 00:05:51.479: INFO: Container dapi-container, restarts: 0 May 25 00:06:01.462: INFO: Container dapi-container, restarts: 0 May 25 00:06:11.462: INFO: Container dapi-container, restarts: 0 May 25 00:06:21.463: INFO: Container dapi-container, restarts: 1 May 25 00:06:21.463: INFO: Container has restart count: 1 STEP: Rewriting the file May 25 00:06:21.463: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-5388 PodName:var-expansion-f4f503e8-0e8a-44d0-bd87-3c7351b909a2 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:06:21.463: INFO: >>> kubeConfig: /root/.kube/config I0525 00:06:21.500393 7 log.go:172] (0xc002ceba20) (0xc0016e90e0) Create stream I0525 00:06:21.500425 7 log.go:172] (0xc002ceba20) (0xc0016e90e0) Stream added, broadcasting: 1 I0525 00:06:21.502840 7 log.go:172] (0xc002ceba20) Reply frame received for 1 I0525 00:06:21.502878 7 log.go:172] (0xc002ceba20) (0xc0028ea5a0) Create stream I0525 00:06:21.502888 7 log.go:172] (0xc002ceba20) (0xc0028ea5a0) Stream added, broadcasting: 3 I0525 00:06:21.503769 7 log.go:172] (0xc002ceba20) Reply frame received for 3 I0525 00:06:21.503796 7 log.go:172] (0xc002ceba20) (0xc001c4ad20) Create stream I0525 00:06:21.503806 7 log.go:172] (0xc002ceba20) (0xc001c4ad20) Stream added, broadcasting: 5 I0525 00:06:21.504749 7 log.go:172] (0xc002ceba20) Reply frame received for 5 I0525 00:06:21.593693 7 log.go:172] (0xc002ceba20) Data frame received for 5 I0525 00:06:21.593739 7 log.go:172] (0xc001c4ad20) (5) Data frame handling I0525 00:06:21.593777 7 log.go:172] (0xc002ceba20) Data frame received for 3 I0525 00:06:21.593816 7 log.go:172] (0xc0028ea5a0) (3) Data frame handling I0525 00:06:21.595098 7 log.go:172] (0xc002ceba20) Data frame received for 1 I0525 00:06:21.595112 7 log.go:172] (0xc0016e90e0) (1) Data frame handling I0525 00:06:21.595126 7 log.go:172] (0xc0016e90e0) (1) Data frame sent I0525 00:06:21.595141 7 log.go:172] (0xc002ceba20) (0xc0016e90e0) Stream removed, broadcasting: 1 I0525 00:06:21.595253 7 log.go:172] (0xc002ceba20) (0xc0016e90e0) Stream removed, broadcasting: 1 I0525 00:06:21.595273 7 log.go:172] (0xc002ceba20) (0xc0028ea5a0) Stream removed, broadcasting: 3 I0525 00:06:21.595284 7 log.go:172] (0xc002ceba20) 
(0xc001c4ad20) Stream removed, broadcasting: 5 May 25 00:06:21.595: INFO: Exec stderr: "" I0525 00:06:21.595304 7 log.go:172] (0xc002ceba20) Go away received May 25 00:06:21.595: INFO: Pod exec output: STEP: Waiting for container to stop restarting May 25 00:06:49.602: INFO: Container has restart count: 2 May 25 00:07:51.603: INFO: Container restart has stabilized STEP: test for subpath mounted with old value May 25 00:07:51.642: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-5388 PodName:var-expansion-f4f503e8-0e8a-44d0-bd87-3c7351b909a2 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:07:51.642: INFO: >>> kubeConfig: /root/.kube/config I0525 00:07:51.674438 7 log.go:172] (0xc0026c1d90) (0xc000ccbcc0) Create stream I0525 00:07:51.674466 7 log.go:172] (0xc0026c1d90) (0xc000ccbcc0) Stream added, broadcasting: 1 I0525 00:07:51.675931 7 log.go:172] (0xc0026c1d90) Reply frame received for 1 I0525 00:07:51.675968 7 log.go:172] (0xc0026c1d90) (0xc001530a00) Create stream I0525 00:07:51.675977 7 log.go:172] (0xc0026c1d90) (0xc001530a00) Stream added, broadcasting: 3 I0525 00:07:51.676848 7 log.go:172] (0xc0026c1d90) Reply frame received for 3 I0525 00:07:51.676877 7 log.go:172] (0xc0026c1d90) (0xc00125a6e0) Create stream I0525 00:07:51.676887 7 log.go:172] (0xc0026c1d90) (0xc00125a6e0) Stream added, broadcasting: 5 I0525 00:07:51.677937 7 log.go:172] (0xc0026c1d90) Reply frame received for 5 I0525 00:07:51.764961 7 log.go:172] (0xc0026c1d90) Data frame received for 5 I0525 00:07:51.765022 7 log.go:172] (0xc00125a6e0) (5) Data frame handling I0525 00:07:51.765063 7 log.go:172] (0xc0026c1d90) Data frame received for 3 I0525 00:07:51.765081 7 log.go:172] (0xc001530a00) (3) Data frame handling I0525 00:07:51.766714 7 log.go:172] (0xc0026c1d90) Data frame received for 1 I0525 00:07:51.766748 7 log.go:172] (0xc000ccbcc0) (1) Data frame handling I0525 00:07:51.766789 7 log.go:172] (0xc000ccbcc0) (1) Data frame sent I0525 00:07:51.766821 7 log.go:172] (0xc0026c1d90) (0xc000ccbcc0) Stream removed, broadcasting: 1 I0525 00:07:51.766850 7 log.go:172] (0xc0026c1d90) Go away received I0525 00:07:51.766952 7 log.go:172] (0xc0026c1d90) (0xc000ccbcc0) Stream removed, broadcasting: 1 I0525 00:07:51.766986 7 log.go:172] (0xc0026c1d90) (0xc001530a00) Stream removed, broadcasting: 3 I0525 00:07:51.767002 7 log.go:172] (0xc0026c1d90) (0xc00125a6e0) Stream removed, broadcasting: 5 May 25 00:07:51.771: INFO: ExecWithOptions {Command:[/bin/sh -c test ! 
-f /volume_mount/newsubpath/test.log] Namespace:var-expansion-5388 PodName:var-expansion-f4f503e8-0e8a-44d0-bd87-3c7351b909a2 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:07:51.771: INFO: >>> kubeConfig: /root/.kube/config I0525 00:07:51.803668 7 log.go:172] (0xc002ceb550) (0xc00134e3c0) Create stream I0525 00:07:51.803696 7 log.go:172] (0xc002ceb550) (0xc00134e3c0) Stream added, broadcasting: 1 I0525 00:07:51.805823 7 log.go:172] (0xc002ceb550) Reply frame received for 1 I0525 00:07:51.805863 7 log.go:172] (0xc002ceb550) (0xc001530aa0) Create stream I0525 00:07:51.805875 7 log.go:172] (0xc002ceb550) (0xc001530aa0) Stream added, broadcasting: 3 I0525 00:07:51.807002 7 log.go:172] (0xc002ceb550) Reply frame received for 3 I0525 00:07:51.807047 7 log.go:172] (0xc002ceb550) (0xc0028eba40) Create stream I0525 00:07:51.807059 7 log.go:172] (0xc002ceb550) (0xc0028eba40) Stream added, broadcasting: 5 I0525 00:07:51.808166 7 log.go:172] (0xc002ceb550) Reply frame received for 5 I0525 00:07:51.867823 7 log.go:172] (0xc002ceb550) Data frame received for 5 I0525 00:07:51.867851 7 log.go:172] (0xc0028eba40) (5) Data frame handling I0525 00:07:51.867868 7 log.go:172] (0xc002ceb550) Data frame received for 3 I0525 00:07:51.867877 7 log.go:172] (0xc001530aa0) (3) Data frame handling I0525 00:07:51.869667 7 log.go:172] (0xc002ceb550) Data frame received for 1 I0525 00:07:51.869684 7 log.go:172] (0xc00134e3c0) (1) Data frame handling I0525 00:07:51.869698 7 log.go:172] (0xc00134e3c0) (1) Data frame sent I0525 00:07:51.869715 7 log.go:172] (0xc002ceb550) (0xc00134e3c0) Stream removed, broadcasting: 1 I0525 00:07:51.869786 7 log.go:172] (0xc002ceb550) (0xc00134e3c0) Stream removed, broadcasting: 1 I0525 00:07:51.869796 7 log.go:172] (0xc002ceb550) (0xc001530aa0) Stream removed, broadcasting: 3 I0525 00:07:51.869803 7 log.go:172] (0xc002ceb550) (0xc0028eba40) Stream removed, broadcasting: 5 May 25 00:07:51.869: INFO: Deleting pod "var-expansion-f4f503e8-0e8a-44d0-bd87-3c7351b909a2" in namespace "var-expansion-5388" I0525 00:07:51.869882 7 log.go:172] (0xc002ceb550) Go away received May 25 00:07:51.875: INFO: Wait up to 5m0s for pod "var-expansion-f4f503e8-0e8a-44d0-bd87-3c7351b909a2" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:08:35.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5388" for this suite. 
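The variable-expansion case above exercises a specific kubelet guarantee: a subPathExpr volume mount is resolved when the mount is first set up, so after the annotation feeding the env var is changed and the container is forced to restart (by failing its liveness probe), the mount still points at the old directory (/volume_mount/foo) and nothing appears under the new value (/volume_mount/newsubpath). A condensed stand-alone sketch of the same flow — not the suite's exact manifest; pod name, image, and paths are illustrative:

# pod whose subpath is expanded from a (mutable) annotation via the downward API
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
  annotations:
    mysubpath: foo
spec:
  restartPolicy: Always
  containers:
  - name: main
    image: busybox:1.29
    command:
    - sh
    - -c
    - echo test > /subpath_mount/test.log && sleep 3600
    env:
    - name: MY_SUBPATH
      valueFrom:
        fieldRef:
          fieldPath: metadata.annotations['mysubpath']
    volumeMounts:
    - name: vol
      mountPath: /volume_mount          # whole volume, for inspection
    - name: vol
      mountPath: /subpath_mount         # bound to vol/$(MY_SUBPATH) at mount time
      subPathExpr: $(MY_SUBPATH)
    livenessProbe:
      exec:
        command: [cat, /subpath_mount/test.log]
      periodSeconds: 2
      failureThreshold: 1
  volumes:
  - name: vol
    emptyDir: {}
EOF
# change the annotation, then fail the probe to force a container restart
kubectl annotate pod subpath-demo mysubpath=newsubpath --overwrite
kubectl exec subpath-demo -- rm /volume_mount/foo/test.log
# after the restart the subpath mount should still resolve to .../foo:
kubectl exec subpath-demo -- test -f /volume_mount/foo/test.log
kubectl exec subpath-demo -- test '!' -f /volume_mount/newsubpath/test.log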
• [SLOW TEST:183.301 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":74,"skipped":923,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:08:35.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:08:36.046: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"6f1c532e-0814-4bdf-88b5-72e2ab4f42f1", Controller:(*bool)(0xc003908b72), BlockOwnerDeletion:(*bool)(0xc003908b73)}} May 25 00:08:36.096: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f3bd7a56-6d9b-42bb-8a3f-ab7245dacf86", Controller:(*bool)(0xc00393b99a), BlockOwnerDeletion:(*bool)(0xc00393b99b)}} May 25 00:08:36.182: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"95255c84-fd88-49ce-835b-754c4a4e6c7f", Controller:(*bool)(0xc003908d5a), BlockOwnerDeletion:(*bool)(0xc003908d5b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:08:41.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6272" for this suite. 
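The garbage-collector case is shorter than it looks: the three OwnerReferences dumps above show pod1 owned by pod3, pod2 by pod1, and pod3 by pod2 — a deliberate ownership cycle — and the assertion is simply that deletion still terminates instead of deadlocking on the circle. A hand-rolled sketch of the same cycle, assuming bare pods with the live UIDs patched in after creation (names and image are illustrative):

# three placeholder pods
for p in pod1 pod2 pod3; do
  kubectl run "$p" --image=k8s.gcr.io/pause:3.2 --restart=Never
done
# wire the cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) using real UIDs
uid() { kubectl get pod "$1" -o jsonpath='{.metadata.uid}'; }
own() { kubectl patch pod "$1" -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"$2\",\"uid\":\"$(uid "$2")\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"; }
own pod1 pod3
own pod2 pod1
own pod3 pod2
# deleting one owner should eventually take all three down,
# without the collector getting stuck on the circular references
kubectl delete pod pod1 --wait=false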
• [SLOW TEST:5.347 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":75,"skipped":928,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:08:41.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-5778 STEP: creating replication controller nodeport-test in namespace services-5778 I0525 00:08:41.499849 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-5778, replica count: 2 I0525 00:08:44.550344 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 00:08:47.550617 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 00:08:47.550: INFO: Creating new exec pod May 25 00:08:52.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5778 execpod29p9w -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 25 00:08:52.809: INFO: stderr: "I0525 00:08:52.705301 1791 log.go:172] (0xc00003a210) (0xc00017d5e0) Create stream\nI0525 00:08:52.705358 1791 log.go:172] (0xc00003a210) (0xc00017d5e0) Stream added, broadcasting: 1\nI0525 00:08:52.708551 1791 log.go:172] (0xc00003a210) Reply frame received for 1\nI0525 00:08:52.708600 1791 log.go:172] (0xc00003a210) (0xc0004d4d20) Create stream\nI0525 00:08:52.708622 1791 log.go:172] (0xc00003a210) (0xc0004d4d20) Stream added, broadcasting: 3\nI0525 00:08:52.709980 1791 log.go:172] (0xc00003a210) Reply frame received for 3\nI0525 00:08:52.710038 1791 log.go:172] (0xc00003a210) (0xc0004d5cc0) Create stream\nI0525 00:08:52.710053 1791 log.go:172] (0xc00003a210) (0xc0004d5cc0) Stream added, broadcasting: 5\nI0525 00:08:52.710950 1791 log.go:172] (0xc00003a210) Reply frame received for 5\nI0525 00:08:52.803184 1791 log.go:172] (0xc00003a210) Data frame received for 5\nI0525 00:08:52.803207 1791 log.go:172] (0xc0004d5cc0) (5) Data frame handling\nI0525 00:08:52.803230 1791 log.go:172] (0xc0004d5cc0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0525 00:08:52.803492 1791 log.go:172] (0xc00003a210) Data frame received for 5\nI0525 00:08:52.803515 1791 
log.go:172] (0xc0004d5cc0) (5) Data frame handling\nI0525 00:08:52.803530 1791 log.go:172] (0xc0004d5cc0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0525 00:08:52.803678 1791 log.go:172] (0xc00003a210) Data frame received for 3\nI0525 00:08:52.803707 1791 log.go:172] (0xc0004d4d20) (3) Data frame handling\nI0525 00:08:52.803732 1791 log.go:172] (0xc00003a210) Data frame received for 5\nI0525 00:08:52.803750 1791 log.go:172] (0xc0004d5cc0) (5) Data frame handling\nI0525 00:08:52.805398 1791 log.go:172] (0xc00003a210) Data frame received for 1\nI0525 00:08:52.805420 1791 log.go:172] (0xc00017d5e0) (1) Data frame handling\nI0525 00:08:52.805433 1791 log.go:172] (0xc00017d5e0) (1) Data frame sent\nI0525 00:08:52.805441 1791 log.go:172] (0xc00003a210) (0xc00017d5e0) Stream removed, broadcasting: 1\nI0525 00:08:52.805451 1791 log.go:172] (0xc00003a210) Go away received\nI0525 00:08:52.805805 1791 log.go:172] (0xc00003a210) (0xc00017d5e0) Stream removed, broadcasting: 1\nI0525 00:08:52.805820 1791 log.go:172] (0xc00003a210) (0xc0004d4d20) Stream removed, broadcasting: 3\nI0525 00:08:52.805827 1791 log.go:172] (0xc00003a210) (0xc0004d5cc0) Stream removed, broadcasting: 5\n" May 25 00:08:52.810: INFO: stdout: "" May 25 00:08:52.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5778 execpod29p9w -- /bin/sh -x -c nc -zv -t -w 2 10.104.205.32 80' May 25 00:08:53.025: INFO: stderr: "I0525 00:08:52.939131 1811 log.go:172] (0xc0009560b0) (0xc000584f00) Create stream\nI0525 00:08:52.939200 1811 log.go:172] (0xc0009560b0) (0xc000584f00) Stream added, broadcasting: 1\nI0525 00:08:52.942575 1811 log.go:172] (0xc0009560b0) Reply frame received for 1\nI0525 00:08:52.942616 1811 log.go:172] (0xc0009560b0) (0xc000250dc0) Create stream\nI0525 00:08:52.942631 1811 log.go:172] (0xc0009560b0) (0xc000250dc0) Stream added, broadcasting: 3\nI0525 00:08:52.943701 1811 log.go:172] (0xc0009560b0) Reply frame received for 3\nI0525 00:08:52.943735 1811 log.go:172] (0xc0009560b0) (0xc0001399a0) Create stream\nI0525 00:08:52.943745 1811 log.go:172] (0xc0009560b0) (0xc0001399a0) Stream added, broadcasting: 5\nI0525 00:08:52.944731 1811 log.go:172] (0xc0009560b0) Reply frame received for 5\nI0525 00:08:53.019640 1811 log.go:172] (0xc0009560b0) Data frame received for 5\nI0525 00:08:53.019671 1811 log.go:172] (0xc0001399a0) (5) Data frame handling\nI0525 00:08:53.019683 1811 log.go:172] (0xc0001399a0) (5) Data frame sent\nI0525 00:08:53.019689 1811 log.go:172] (0xc0009560b0) Data frame received for 5\nI0525 00:08:53.019694 1811 log.go:172] (0xc0001399a0) (5) Data frame handling\nI0525 00:08:53.019704 1811 log.go:172] (0xc0009560b0) Data frame received for 3\nI0525 00:08:53.019712 1811 log.go:172] (0xc000250dc0) (3) Data frame handling\n+ nc -zv -t -w 2 10.104.205.32 80\nConnection to 10.104.205.32 80 port [tcp/http] succeeded!\nI0525 00:08:53.020945 1811 log.go:172] (0xc0009560b0) Data frame received for 1\nI0525 00:08:53.020965 1811 log.go:172] (0xc000584f00) (1) Data frame handling\nI0525 00:08:53.020975 1811 log.go:172] (0xc000584f00) (1) Data frame sent\nI0525 00:08:53.020987 1811 log.go:172] (0xc0009560b0) (0xc000584f00) Stream removed, broadcasting: 1\nI0525 00:08:53.020998 1811 log.go:172] (0xc0009560b0) Go away received\nI0525 00:08:53.021476 1811 log.go:172] (0xc0009560b0) (0xc000584f00) Stream removed, broadcasting: 1\nI0525 00:08:53.021491 1811 log.go:172] (0xc0009560b0) (0xc000250dc0) Stream 
removed, broadcasting: 3\nI0525 00:08:53.021498 1811 log.go:172] (0xc0009560b0) (0xc0001399a0) Stream removed, broadcasting: 5\n" May 25 00:08:53.025: INFO: stdout: "" May 25 00:08:53.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5778 execpod29p9w -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30850' May 25 00:08:53.224: INFO: stderr: "I0525 00:08:53.150203 1832 log.go:172] (0xc00094c160) (0xc000338780) Create stream\nI0525 00:08:53.150251 1832 log.go:172] (0xc00094c160) (0xc000338780) Stream added, broadcasting: 1\nI0525 00:08:53.156220 1832 log.go:172] (0xc00094c160) Reply frame received for 1\nI0525 00:08:53.156262 1832 log.go:172] (0xc00094c160) (0xc0001ec000) Create stream\nI0525 00:08:53.156275 1832 log.go:172] (0xc00094c160) (0xc0001ec000) Stream added, broadcasting: 3\nI0525 00:08:53.157711 1832 log.go:172] (0xc00094c160) Reply frame received for 3\nI0525 00:08:53.157746 1832 log.go:172] (0xc00094c160) (0xc0001ecf00) Create stream\nI0525 00:08:53.157759 1832 log.go:172] (0xc00094c160) (0xc0001ecf00) Stream added, broadcasting: 5\nI0525 00:08:53.158658 1832 log.go:172] (0xc00094c160) Reply frame received for 5\nI0525 00:08:53.218733 1832 log.go:172] (0xc00094c160) Data frame received for 3\nI0525 00:08:53.218769 1832 log.go:172] (0xc0001ec000) (3) Data frame handling\nI0525 00:08:53.218791 1832 log.go:172] (0xc00094c160) Data frame received for 5\nI0525 00:08:53.218815 1832 log.go:172] (0xc0001ecf00) (5) Data frame handling\nI0525 00:08:53.218838 1832 log.go:172] (0xc0001ecf00) (5) Data frame sent\nI0525 00:08:53.218849 1832 log.go:172] (0xc00094c160) Data frame received for 5\nI0525 00:08:53.218857 1832 log.go:172] (0xc0001ecf00) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30850\nConnection to 172.17.0.13 30850 port [tcp/30850] succeeded!\nI0525 00:08:53.219948 1832 log.go:172] (0xc00094c160) Data frame received for 1\nI0525 00:08:53.219969 1832 log.go:172] (0xc000338780) (1) Data frame handling\nI0525 00:08:53.219982 1832 log.go:172] (0xc000338780) (1) Data frame sent\nI0525 00:08:53.219994 1832 log.go:172] (0xc00094c160) (0xc000338780) Stream removed, broadcasting: 1\nI0525 00:08:53.220006 1832 log.go:172] (0xc00094c160) Go away received\nI0525 00:08:53.220361 1832 log.go:172] (0xc00094c160) (0xc000338780) Stream removed, broadcasting: 1\nI0525 00:08:53.220376 1832 log.go:172] (0xc00094c160) (0xc0001ec000) Stream removed, broadcasting: 3\nI0525 00:08:53.220384 1832 log.go:172] (0xc00094c160) (0xc0001ecf00) Stream removed, broadcasting: 5\n" May 25 00:08:53.224: INFO: stdout: "" May 25 00:08:53.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5778 execpod29p9w -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30850' May 25 00:08:53.426: INFO: stderr: "I0525 00:08:53.361571 1855 log.go:172] (0xc000ad8000) (0xc00044ae60) Create stream\nI0525 00:08:53.361647 1855 log.go:172] (0xc000ad8000) (0xc00044ae60) Stream added, broadcasting: 1\nI0525 00:08:53.365875 1855 log.go:172] (0xc000ad8000) Reply frame received for 1\nI0525 00:08:53.365939 1855 log.go:172] (0xc000ad8000) (0xc0004305a0) Create stream\nI0525 00:08:53.365961 1855 log.go:172] (0xc000ad8000) (0xc0004305a0) Stream added, broadcasting: 3\nI0525 00:08:53.367230 1855 log.go:172] (0xc000ad8000) Reply frame received for 3\nI0525 00:08:53.367280 1855 log.go:172] (0xc000ad8000) (0xc0004312c0) Create stream\nI0525 00:08:53.367300 1855 log.go:172] 
(0xc000ad8000) (0xc0004312c0) Stream added, broadcasting: 5\nI0525 00:08:53.368395 1855 log.go:172] (0xc000ad8000) Reply frame received for 5\nI0525 00:08:53.420277 1855 log.go:172] (0xc000ad8000) Data frame received for 3\nI0525 00:08:53.420309 1855 log.go:172] (0xc0004305a0) (3) Data frame handling\nI0525 00:08:53.420338 1855 log.go:172] (0xc000ad8000) Data frame received for 5\nI0525 00:08:53.420345 1855 log.go:172] (0xc0004312c0) (5) Data frame handling\nI0525 00:08:53.420354 1855 log.go:172] (0xc0004312c0) (5) Data frame sent\nI0525 00:08:53.420359 1855 log.go:172] (0xc000ad8000) Data frame received for 5\nI0525 00:08:53.420365 1855 log.go:172] (0xc0004312c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30850\nConnection to 172.17.0.12 30850 port [tcp/30850] succeeded!\nI0525 00:08:53.421596 1855 log.go:172] (0xc000ad8000) Data frame received for 1\nI0525 00:08:53.421619 1855 log.go:172] (0xc00044ae60) (1) Data frame handling\nI0525 00:08:53.421634 1855 log.go:172] (0xc00044ae60) (1) Data frame sent\nI0525 00:08:53.421651 1855 log.go:172] (0xc000ad8000) (0xc00044ae60) Stream removed, broadcasting: 1\nI0525 00:08:53.421917 1855 log.go:172] (0xc000ad8000) Go away received\nI0525 00:08:53.421988 1855 log.go:172] (0xc000ad8000) (0xc00044ae60) Stream removed, broadcasting: 1\nI0525 00:08:53.422004 1855 log.go:172] (0xc000ad8000) (0xc0004305a0) Stream removed, broadcasting: 3\nI0525 00:08:53.422013 1855 log.go:172] (0xc000ad8000) (0xc0004312c0) Stream removed, broadcasting: 5\n" May 25 00:08:53.426: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:08:53.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5778" for this suite. 
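The four nc invocations above are the whole NodePort contract: the backends must answer via the service DNS name on port 80, via the ClusterIP on port 80, and via each node's InternalIP on the allocated node port (30850 in this run). The same reachability matrix can be replayed from any exec-capable pod in the service's namespace; a sketch, assuming the service and an exec pod named execpod already exist:

SVC=nodeport-test
CLUSTER_IP=$(kubectl get svc "$SVC" -o jsonpath='{.spec.clusterIP}')
NODE_PORT=$(kubectl get svc "$SVC" -o jsonpath='{.spec.ports[0].nodePort}')
NODE_IPS=$(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')
kubectl exec execpod -- nc -zv -t -w 2 "$SVC" 80            # service DNS name
kubectl exec execpod -- nc -zv -t -w 2 "$CLUSTER_IP" 80     # cluster IP
for ip in $NODE_IPS; do                                     # every node, node port
  kubectl exec execpod -- nc -zv -t -w 2 "$ip" "$NODE_PORT"
done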
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.168 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":76,"skipped":942,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:08:53.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 25 00:08:53.555: INFO: Waiting up to 5m0s for pod "pod-889af429-2386-4c3d-b190-9f22e334e942" in namespace "emptydir-5296" to be "Succeeded or Failed" May 25 00:08:53.558: INFO: Pod "pod-889af429-2386-4c3d-b190-9f22e334e942": Phase="Pending", Reason="", readiness=false. Elapsed: 3.131766ms May 25 00:08:55.563: INFO: Pod "pod-889af429-2386-4c3d-b190-9f22e334e942": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007812001s May 25 00:08:57.567: INFO: Pod "pod-889af429-2386-4c3d-b190-9f22e334e942": Phase="Running", Reason="", readiness=true. Elapsed: 4.011734704s May 25 00:08:59.571: INFO: Pod "pod-889af429-2386-4c3d-b190-9f22e334e942": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016500695s STEP: Saw pod success May 25 00:08:59.571: INFO: Pod "pod-889af429-2386-4c3d-b190-9f22e334e942" satisfied condition "Succeeded or Failed" May 25 00:08:59.575: INFO: Trying to get logs from node latest-worker pod pod-889af429-2386-4c3d-b190-9f22e334e942 container test-container: STEP: delete the pod May 25 00:08:59.735: INFO: Waiting for pod pod-889af429-2386-4c3d-b190-9f22e334e942 to disappear May 25 00:08:59.822: INFO: Pod pod-889af429-2386-4c3d-b190-9f22e334e942 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:08:59.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5296" for this suite. 
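Both emptydir permission cases in this run (tmpfs earlier, default medium here) follow one template: run a container as a non-root UID, create a file with the requested mode on the volume, and verify mode and content from inside before the pod exits. A rough stand-alone equivalent of the (non-root,0644,default) variant, using plain busybox rather than the suite's mounttest image, so the printed format differs:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                    # non-root, per the test name
  containers:
  - name: test-container
    image: busybox:1.29
    command:
    - sh
    - -c
    - echo mount-tester new file > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # default medium; medium: Memory gives the tmpfs variant
EOF
# once the pod reaches Succeeded, the log should show mode -rw-r--r--
kubectl logs emptydir-mode-demo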
• [SLOW TEST:6.407 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":77,"skipped":964,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:08:59.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6592 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6592 STEP: creating replication controller externalsvc in namespace services-6592 I0525 00:09:00.097594 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6592, replica count: 2 I0525 00:09:03.148054 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 00:09:06.148288 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 25 00:09:06.210: INFO: Creating new exec pod May 25 00:09:10.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6592 execpoddhlmw -- /bin/sh -x -c nslookup clusterip-service' May 25 00:09:10.479: INFO: stderr: "I0525 00:09:10.373614 1877 log.go:172] (0xc00003ad10) (0xc000309220) Create stream\nI0525 00:09:10.373665 1877 log.go:172] (0xc00003ad10) (0xc000309220) Stream added, broadcasting: 1\nI0525 00:09:10.376168 1877 log.go:172] (0xc00003ad10) Reply frame received for 1\nI0525 00:09:10.376328 1877 log.go:172] (0xc00003ad10) (0xc0000e0e60) Create stream\nI0525 00:09:10.376346 1877 log.go:172] (0xc00003ad10) (0xc0000e0e60) Stream added, broadcasting: 3\nI0525 00:09:10.377762 1877 log.go:172] (0xc00003ad10) Reply frame received for 3\nI0525 00:09:10.377826 1877 log.go:172] (0xc00003ad10) (0xc000309ae0) Create stream\nI0525 00:09:10.377842 1877 log.go:172] (0xc00003ad10) (0xc000309ae0) Stream added, broadcasting: 5\nI0525 00:09:10.378893 1877 log.go:172] (0xc00003ad10) Reply frame received for 5\nI0525 00:09:10.462159 1877 log.go:172] (0xc00003ad10) Data frame received for 5\nI0525 
00:09:10.462189 1877 log.go:172] (0xc000309ae0) (5) Data frame handling\nI0525 00:09:10.462209 1877 log.go:172] (0xc000309ae0) (5) Data frame sent\n+ nslookup clusterip-service\nI0525 00:09:10.471984 1877 log.go:172] (0xc00003ad10) Data frame received for 3\nI0525 00:09:10.472010 1877 log.go:172] (0xc0000e0e60) (3) Data frame handling\nI0525 00:09:10.472036 1877 log.go:172] (0xc0000e0e60) (3) Data frame sent\nI0525 00:09:10.472793 1877 log.go:172] (0xc00003ad10) Data frame received for 3\nI0525 00:09:10.472813 1877 log.go:172] (0xc0000e0e60) (3) Data frame handling\nI0525 00:09:10.472829 1877 log.go:172] (0xc0000e0e60) (3) Data frame sent\nI0525 00:09:10.473656 1877 log.go:172] (0xc00003ad10) Data frame received for 5\nI0525 00:09:10.473677 1877 log.go:172] (0xc000309ae0) (5) Data frame handling\nI0525 00:09:10.473935 1877 log.go:172] (0xc00003ad10) Data frame received for 3\nI0525 00:09:10.473957 1877 log.go:172] (0xc0000e0e60) (3) Data frame handling\nI0525 00:09:10.475202 1877 log.go:172] (0xc00003ad10) Data frame received for 1\nI0525 00:09:10.475218 1877 log.go:172] (0xc000309220) (1) Data frame handling\nI0525 00:09:10.475228 1877 log.go:172] (0xc000309220) (1) Data frame sent\nI0525 00:09:10.475242 1877 log.go:172] (0xc00003ad10) (0xc000309220) Stream removed, broadcasting: 1\nI0525 00:09:10.475258 1877 log.go:172] (0xc00003ad10) Go away received\nI0525 00:09:10.475674 1877 log.go:172] (0xc00003ad10) (0xc000309220) Stream removed, broadcasting: 1\nI0525 00:09:10.475694 1877 log.go:172] (0xc00003ad10) (0xc0000e0e60) Stream removed, broadcasting: 3\nI0525 00:09:10.475704 1877 log.go:172] (0xc00003ad10) (0xc000309ae0) Stream removed, broadcasting: 5\n" May 25 00:09:10.479: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6592.svc.cluster.local\tcanonical name = externalsvc.services-6592.svc.cluster.local.\nName:\texternalsvc.services-6592.svc.cluster.local\nAddress: 10.98.152.25\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6592, will wait for the garbage collector to delete the pods May 25 00:09:10.539: INFO: Deleting ReplicationController externalsvc took: 6.638713ms May 25 00:09:10.840: INFO: Terminating ReplicationController externalsvc pods took: 300.248903ms May 25 00:09:25.037: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:09:25.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6592" for this suite. 
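The nslookup output above is the whole assertion for the ClusterIP-to-ExternalName flip: after the type change, cluster DNS answers for clusterip-service with a CNAME to the externalName target instead of an A record for the old ClusterIP. The flip itself is a single patch; a sketch against an existing service (names are illustrative, and spec.clusterIP has to be cleared in the same update):

kubectl patch svc clusterip-service -p '{
  "spec": {
    "type": "ExternalName",
    "clusterIP": "",
    "externalName": "externalsvc.services-6592.svc.cluster.local"
  }
}'
# from a pod in the same namespace, the short name should now resolve as a CNAME
kubectl exec execpod -- nslookup clusterip-service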
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:25.288 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":78,"skipped":989,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:09:25.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3838 May 25 00:09:29.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3838 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 25 00:09:29.452: INFO: stderr: "I0525 00:09:29.346526 1899 log.go:172] (0xc0005e9b80) (0xc0005092c0) Create stream\nI0525 00:09:29.346601 1899 log.go:172] (0xc0005e9b80) (0xc0005092c0) Stream added, broadcasting: 1\nI0525 00:09:29.348909 1899 log.go:172] (0xc0005e9b80) Reply frame received for 1\nI0525 00:09:29.348954 1899 log.go:172] (0xc0005e9b80) (0xc000290e60) Create stream\nI0525 00:09:29.348966 1899 log.go:172] (0xc0005e9b80) (0xc000290e60) Stream added, broadcasting: 3\nI0525 00:09:29.350216 1899 log.go:172] (0xc0005e9b80) Reply frame received for 3\nI0525 00:09:29.350264 1899 log.go:172] (0xc0005e9b80) (0xc00067ea00) Create stream\nI0525 00:09:29.350282 1899 log.go:172] (0xc0005e9b80) (0xc00067ea00) Stream added, broadcasting: 5\nI0525 00:09:29.351193 1899 log.go:172] (0xc0005e9b80) Reply frame received for 5\nI0525 00:09:29.440913 1899 log.go:172] (0xc0005e9b80) Data frame received for 5\nI0525 00:09:29.440952 1899 log.go:172] (0xc00067ea00) (5) Data frame handling\nI0525 00:09:29.440973 1899 log.go:172] (0xc00067ea00) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0525 00:09:29.444174 1899 log.go:172] (0xc0005e9b80) Data frame received for 3\nI0525 00:09:29.444206 1899 log.go:172] (0xc000290e60) (3) Data frame handling\nI0525 00:09:29.444238 1899 log.go:172] (0xc000290e60) (3) Data frame sent\nI0525 00:09:29.444475 1899 log.go:172] (0xc0005e9b80) Data frame received for 3\nI0525 00:09:29.444495 1899 log.go:172] (0xc000290e60) (3) Data frame handling\nI0525 00:09:29.444529 1899 log.go:172] (0xc0005e9b80) Data frame received for 
5\nI0525 00:09:29.444556 1899 log.go:172] (0xc00067ea00) (5) Data frame handling\nI0525 00:09:29.446225 1899 log.go:172] (0xc0005e9b80) Data frame received for 1\nI0525 00:09:29.446242 1899 log.go:172] (0xc0005092c0) (1) Data frame handling\nI0525 00:09:29.446251 1899 log.go:172] (0xc0005092c0) (1) Data frame sent\nI0525 00:09:29.446263 1899 log.go:172] (0xc0005e9b80) (0xc0005092c0) Stream removed, broadcasting: 1\nI0525 00:09:29.446337 1899 log.go:172] (0xc0005e9b80) Go away received\nI0525 00:09:29.446588 1899 log.go:172] (0xc0005e9b80) (0xc0005092c0) Stream removed, broadcasting: 1\nI0525 00:09:29.446606 1899 log.go:172] (0xc0005e9b80) (0xc000290e60) Stream removed, broadcasting: 3\nI0525 00:09:29.446614 1899 log.go:172] (0xc0005e9b80) (0xc00067ea00) Stream removed, broadcasting: 5\n" May 25 00:09:29.452: INFO: stdout: "iptables" May 25 00:09:29.452: INFO: proxyMode: iptables May 25 00:09:29.457: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 00:09:29.505: INFO: Pod kube-proxy-mode-detector still exists May 25 00:09:31.505: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 00:09:31.509: INFO: Pod kube-proxy-mode-detector still exists May 25 00:09:33.505: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 00:09:33.508: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-3838 STEP: creating replication controller affinity-clusterip-timeout in namespace services-3838 I0525 00:09:33.601361 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-3838, replica count: 3 I0525 00:09:36.651774 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 00:09:39.652063 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 00:09:39.659: INFO: Creating new exec pod May 25 00:09:44.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3838 execpod-affinityg5qss -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 25 00:09:44.979: INFO: stderr: "I0525 00:09:44.883478 1922 log.go:172] (0xc000b44420) (0xc000020e60) Create stream\nI0525 00:09:44.883533 1922 log.go:172] (0xc000b44420) (0xc000020e60) Stream added, broadcasting: 1\nI0525 00:09:44.886140 1922 log.go:172] (0xc000b44420) Reply frame received for 1\nI0525 00:09:44.886170 1922 log.go:172] (0xc000b44420) (0xc00002c000) Create stream\nI0525 00:09:44.886180 1922 log.go:172] (0xc000b44420) (0xc00002c000) Stream added, broadcasting: 3\nI0525 00:09:44.886928 1922 log.go:172] (0xc000b44420) Reply frame received for 3\nI0525 00:09:44.886971 1922 log.go:172] (0xc000b44420) (0xc000021400) Create stream\nI0525 00:09:44.886992 1922 log.go:172] (0xc000b44420) (0xc000021400) Stream added, broadcasting: 5\nI0525 00:09:44.887841 1922 log.go:172] (0xc000b44420) Reply frame received for 5\nI0525 00:09:44.970842 1922 log.go:172] (0xc000b44420) Data frame received for 5\nI0525 00:09:44.970862 1922 log.go:172] (0xc000021400) (5) Data frame handling\nI0525 00:09:44.970871 1922 log.go:172] (0xc000021400) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0525 00:09:44.971243 1922 log.go:172] (0xc000b44420) Data frame received for 5\nI0525 00:09:44.971257 1922 
log.go:172] (0xc000021400) (5) Data frame handling\nI0525 00:09:44.971273 1922 log.go:172] (0xc000021400) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0525 00:09:44.971722 1922 log.go:172] (0xc000b44420) Data frame received for 3\nI0525 00:09:44.971732 1922 log.go:172] (0xc00002c000) (3) Data frame handling\nI0525 00:09:44.972048 1922 log.go:172] (0xc000b44420) Data frame received for 5\nI0525 00:09:44.972058 1922 log.go:172] (0xc000021400) (5) Data frame handling\nI0525 00:09:44.973776 1922 log.go:172] (0xc000b44420) Data frame received for 1\nI0525 00:09:44.973798 1922 log.go:172] (0xc000020e60) (1) Data frame handling\nI0525 00:09:44.973815 1922 log.go:172] (0xc000020e60) (1) Data frame sent\nI0525 00:09:44.973831 1922 log.go:172] (0xc000b44420) (0xc000020e60) Stream removed, broadcasting: 1\nI0525 00:09:44.973845 1922 log.go:172] (0xc000b44420) Go away received\nI0525 00:09:44.974865 1922 log.go:172] (0xc000b44420) (0xc000020e60) Stream removed, broadcasting: 1\nI0525 00:09:44.974948 1922 log.go:172] (0xc000b44420) (0xc00002c000) Stream removed, broadcasting: 3\nI0525 00:09:44.975027 1922 log.go:172] (0xc000b44420) (0xc000021400) Stream removed, broadcasting: 5\n" May 25 00:09:44.979: INFO: stdout: "" May 25 00:09:44.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3838 execpod-affinityg5qss -- /bin/sh -x -c nc -zv -t -w 2 10.102.41.158 80' May 25 00:09:45.206: INFO: stderr: "I0525 00:09:45.112949 1944 log.go:172] (0xc00003a790) (0xc00026b040) Create stream\nI0525 00:09:45.113012 1944 log.go:172] (0xc00003a790) (0xc00026b040) Stream added, broadcasting: 1\nI0525 00:09:45.115702 1944 log.go:172] (0xc00003a790) Reply frame received for 1\nI0525 00:09:45.115743 1944 log.go:172] (0xc00003a790) (0xc000652be0) Create stream\nI0525 00:09:45.115756 1944 log.go:172] (0xc00003a790) (0xc000652be0) Stream added, broadcasting: 3\nI0525 00:09:45.116610 1944 log.go:172] (0xc00003a790) Reply frame received for 3\nI0525 00:09:45.116648 1944 log.go:172] (0xc00003a790) (0xc0004fc3c0) Create stream\nI0525 00:09:45.116656 1944 log.go:172] (0xc00003a790) (0xc0004fc3c0) Stream added, broadcasting: 5\nI0525 00:09:45.117666 1944 log.go:172] (0xc00003a790) Reply frame received for 5\nI0525 00:09:45.198843 1944 log.go:172] (0xc00003a790) Data frame received for 3\nI0525 00:09:45.198886 1944 log.go:172] (0xc000652be0) (3) Data frame handling\nI0525 00:09:45.198918 1944 log.go:172] (0xc00003a790) Data frame received for 5\nI0525 00:09:45.198934 1944 log.go:172] (0xc0004fc3c0) (5) Data frame handling\nI0525 00:09:45.198952 1944 log.go:172] (0xc0004fc3c0) (5) Data frame sent\nI0525 00:09:45.198981 1944 log.go:172] (0xc00003a790) Data frame received for 5\nI0525 00:09:45.198994 1944 log.go:172] (0xc0004fc3c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.41.158 80\nConnection to 10.102.41.158 80 port [tcp/http] succeeded!\nI0525 00:09:45.200359 1944 log.go:172] (0xc00003a790) Data frame received for 1\nI0525 00:09:45.200386 1944 log.go:172] (0xc00026b040) (1) Data frame handling\nI0525 00:09:45.200405 1944 log.go:172] (0xc00026b040) (1) Data frame sent\nI0525 00:09:45.200422 1944 log.go:172] (0xc00003a790) (0xc00026b040) Stream removed, broadcasting: 1\nI0525 00:09:45.200692 1944 log.go:172] (0xc00003a790) Go away received\nI0525 00:09:45.200818 1944 log.go:172] (0xc00003a790) (0xc00026b040) Stream removed, broadcasting: 1\nI0525 00:09:45.200840 1944 log.go:172] (0xc00003a790) 
(0xc000652be0) Stream removed, broadcasting: 3\nI0525 00:09:45.200851 1944 log.go:172] (0xc00003a790) (0xc0004fc3c0) Stream removed, broadcasting: 5\n" May 25 00:09:45.206: INFO: stdout: "" May 25 00:09:45.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3838 execpod-affinityg5qss -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.41.158:80/ ; done' May 25 00:09:45.478: INFO: stderr: "I0525 00:09:45.339792 1966 log.go:172] (0xc000a156b0) (0xc0006ccfa0) Create stream\nI0525 00:09:45.339871 1966 log.go:172] (0xc000a156b0) (0xc0006ccfa0) Stream added, broadcasting: 1\nI0525 00:09:45.345858 1966 log.go:172] (0xc000a156b0) Reply frame received for 1\nI0525 00:09:45.345911 1966 log.go:172] (0xc000a156b0) (0xc0006daf00) Create stream\nI0525 00:09:45.345929 1966 log.go:172] (0xc000a156b0) (0xc0006daf00) Stream added, broadcasting: 3\nI0525 00:09:45.347059 1966 log.go:172] (0xc000a156b0) Reply frame received for 3\nI0525 00:09:45.347106 1966 log.go:172] (0xc000a156b0) (0xc000685c20) Create stream\nI0525 00:09:45.347123 1966 log.go:172] (0xc000a156b0) (0xc000685c20) Stream added, broadcasting: 5\nI0525 00:09:45.348184 1966 log.go:172] (0xc000a156b0) Reply frame received for 5\nI0525 00:09:45.389766 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.389795 1966 log.go:172] (0xc000685c20) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.389821 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.389865 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.389876 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.389900 1966 log.go:172] (0xc000685c20) (5) Data frame sent\nI0525 00:09:45.395233 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.395248 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.395259 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.396324 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.396350 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.396360 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.396374 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.396390 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.396403 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.400436 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.400453 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.400468 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.400791 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.400803 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.400813 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.400847 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.400863 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.400877 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.404763 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.404777 1966 log.go:172] (0xc0006daf00) (3) Data 
frame handling\nI0525 00:09:45.404785 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.405659 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.405697 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.405717 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.405746 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.405763 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.405786 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.409752 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.409778 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.409911 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.410363 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.410379 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.410394 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.410418 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.410433 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.410451 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.413885 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.413903 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.413912 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.414278 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.414310 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.414328 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.414355 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.414372 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.414395 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.420046 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.420057 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.420064 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.420541 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.420554 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.420563 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.420574 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.420579 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.420585 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.424627 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.424648 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.424666 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.425527 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.425564 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.425584 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.425611 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.425628 1966 log.go:172] (0xc000685c20) (5) 
Data frame handling\nI0525 00:09:45.425649 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.430235 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.430260 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.430282 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.430944 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.430961 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.430970 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.430982 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.430988 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.430994 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.435615 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.435641 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.435675 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.436375 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.436391 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.436400 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.436413 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.436419 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.436431 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.441282 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.441296 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.441306 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.441943 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.441958 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.441967 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.441975 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.441984 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.441990 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.445344 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.445366 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.445389 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.445705 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.445719 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.445731 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.445747 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.445763 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.445775 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.450228 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.450252 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.450275 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.450700 1966 log.go:172] (0xc000a156b0) Data frame 
received for 3\nI0525 00:09:45.450718 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.450802 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.450841 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.450856 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.450869 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.455029 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.455060 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.455140 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.455798 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.455818 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.455829 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.455845 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.455855 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.455864 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.459193 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.459204 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.459210 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.459666 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.459677 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.459683 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.459707 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.459747 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.459772 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.465059 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.465091 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.465401 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.465699 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.465719 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.465735 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.465743 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.465755 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.465762 1966 log.go:172] (0xc000685c20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.470937 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.470979 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.471016 1966 log.go:172] (0xc0006daf00) (3) Data frame sent\nI0525 00:09:45.471868 1966 log.go:172] (0xc000a156b0) Data frame received for 3\nI0525 00:09:45.471890 1966 log.go:172] (0xc0006daf00) (3) Data frame handling\nI0525 00:09:45.471923 1966 log.go:172] (0xc000a156b0) Data frame received for 5\nI0525 00:09:45.471953 1966 log.go:172] (0xc000685c20) (5) Data frame handling\nI0525 00:09:45.474215 1966 log.go:172] (0xc000a156b0) Data frame received for 1\nI0525 00:09:45.474234 1966 log.go:172] (0xc0006ccfa0) (1) Data frame handling\nI0525 00:09:45.474249 1966 log.go:172] 
(0xc0006ccfa0) (1) Data frame sent\nI0525 00:09:45.474266 1966 log.go:172] (0xc000a156b0) (0xc0006ccfa0) Stream removed, broadcasting: 1\nI0525 00:09:45.474279 1966 log.go:172] (0xc000a156b0) Go away received\nI0525 00:09:45.474739 1966 log.go:172] (0xc000a156b0) (0xc0006ccfa0) Stream removed, broadcasting: 1\nI0525 00:09:45.474761 1966 log.go:172] (0xc000a156b0) (0xc0006daf00) Stream removed, broadcasting: 3\nI0525 00:09:45.474775 1966 log.go:172] (0xc000a156b0) (0xc000685c20) Stream removed, broadcasting: 5\n" May 25 00:09:45.479: INFO: stdout: "\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb\naffinity-clusterip-timeout-vh2cb" May 25 00:09:45.479: INFO: Received response from host: May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Received response from host: affinity-clusterip-timeout-vh2cb May 25 00:09:45.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3838 execpod-affinityg5qss -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.41.158:80/' May 25 00:09:45.691: INFO: stderr: "I0525 00:09:45.615740 1988 log.go:172] (0xc0000e8370) (0xc0003a8e60) Create stream\nI0525 00:09:45.615812 1988 log.go:172] (0xc0000e8370) (0xc0003a8e60) Stream added, broadcasting: 1\nI0525 00:09:45.618040 1988 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0525 00:09:45.618106 1988 log.go:172] (0xc0000e8370) (0xc00062adc0) Create stream\nI0525 00:09:45.618142 1988 log.go:172] (0xc0000e8370) (0xc00062adc0) Stream added, broadcasting: 3\nI0525 00:09:45.619211 1988 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0525 00:09:45.619247 1988 log.go:172] (0xc0000e8370) (0xc00062bd60) Create stream\nI0525 00:09:45.619258 1988 log.go:172] (0xc0000e8370) (0xc00062bd60) 
Stream added, broadcasting: 5\nI0525 00:09:45.620347 1988 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0525 00:09:45.678606 1988 log.go:172] (0xc0000e8370) Data frame received for 5\nI0525 00:09:45.678633 1988 log.go:172] (0xc00062bd60) (5) Data frame handling\nI0525 00:09:45.678650 1988 log.go:172] (0xc00062bd60) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:09:45.683625 1988 log.go:172] (0xc0000e8370) Data frame received for 3\nI0525 00:09:45.683647 1988 log.go:172] (0xc00062adc0) (3) Data frame handling\nI0525 00:09:45.683664 1988 log.go:172] (0xc00062adc0) (3) Data frame sent\nI0525 00:09:45.684163 1988 log.go:172] (0xc0000e8370) Data frame received for 3\nI0525 00:09:45.684187 1988 log.go:172] (0xc00062adc0) (3) Data frame handling\nI0525 00:09:45.684328 1988 log.go:172] (0xc0000e8370) Data frame received for 5\nI0525 00:09:45.684342 1988 log.go:172] (0xc00062bd60) (5) Data frame handling\nI0525 00:09:45.686247 1988 log.go:172] (0xc0000e8370) Data frame received for 1\nI0525 00:09:45.686280 1988 log.go:172] (0xc0003a8e60) (1) Data frame handling\nI0525 00:09:45.686304 1988 log.go:172] (0xc0003a8e60) (1) Data frame sent\nI0525 00:09:45.686339 1988 log.go:172] (0xc0000e8370) (0xc0003a8e60) Stream removed, broadcasting: 1\nI0525 00:09:45.686377 1988 log.go:172] (0xc0000e8370) Go away received\nI0525 00:09:45.686851 1988 log.go:172] (0xc0000e8370) (0xc0003a8e60) Stream removed, broadcasting: 1\nI0525 00:09:45.686875 1988 log.go:172] (0xc0000e8370) (0xc00062adc0) Stream removed, broadcasting: 3\nI0525 00:09:45.686885 1988 log.go:172] (0xc0000e8370) (0xc00062bd60) Stream removed, broadcasting: 5\n" May 25 00:09:45.691: INFO: stdout: "affinity-clusterip-timeout-vh2cb" May 25 00:10:00.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3838 execpod-affinityg5qss -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.41.158:80/' May 25 00:10:00.932: INFO: stderr: "I0525 00:10:00.826617 2011 log.go:172] (0xc00003a420) (0xc0005ff860) Create stream\nI0525 00:10:00.826682 2011 log.go:172] (0xc00003a420) (0xc0005ff860) Stream added, broadcasting: 1\nI0525 00:10:00.828576 2011 log.go:172] (0xc00003a420) Reply frame received for 1\nI0525 00:10:00.828640 2011 log.go:172] (0xc00003a420) (0xc000592140) Create stream\nI0525 00:10:00.828666 2011 log.go:172] (0xc00003a420) (0xc000592140) Stream added, broadcasting: 3\nI0525 00:10:00.830005 2011 log.go:172] (0xc00003a420) Reply frame received for 3\nI0525 00:10:00.830065 2011 log.go:172] (0xc00003a420) (0xc0005930e0) Create stream\nI0525 00:10:00.830084 2011 log.go:172] (0xc00003a420) (0xc0005930e0) Stream added, broadcasting: 5\nI0525 00:10:00.830916 2011 log.go:172] (0xc00003a420) Reply frame received for 5\nI0525 00:10:00.917022 2011 log.go:172] (0xc00003a420) Data frame received for 5\nI0525 00:10:00.917042 2011 log.go:172] (0xc0005930e0) (5) Data frame handling\nI0525 00:10:00.917053 2011 log.go:172] (0xc0005930e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.41.158:80/\nI0525 00:10:00.924063 2011 log.go:172] (0xc00003a420) Data frame received for 3\nI0525 00:10:00.924076 2011 log.go:172] (0xc000592140) (3) Data frame handling\nI0525 00:10:00.924089 2011 log.go:172] (0xc000592140) (3) Data frame sent\nI0525 00:10:00.924780 2011 log.go:172] (0xc00003a420) Data frame received for 3\nI0525 00:10:00.924796 2011 log.go:172] (0xc000592140) (3) Data frame handling\nI0525 00:10:00.925257 2011 
log.go:172] (0xc00003a420) Data frame received for 5\nI0525 00:10:00.925271 2011 log.go:172] (0xc0005930e0) (5) Data frame handling\nI0525 00:10:00.927222 2011 log.go:172] (0xc00003a420) Data frame received for 1\nI0525 00:10:00.927281 2011 log.go:172] (0xc0005ff860) (1) Data frame handling\nI0525 00:10:00.927294 2011 log.go:172] (0xc0005ff860) (1) Data frame sent\nI0525 00:10:00.927305 2011 log.go:172] (0xc00003a420) (0xc0005ff860) Stream removed, broadcasting: 1\nI0525 00:10:00.927361 2011 log.go:172] (0xc00003a420) Go away received\nI0525 00:10:00.927536 2011 log.go:172] (0xc00003a420) (0xc0005ff860) Stream removed, broadcasting: 1\nI0525 00:10:00.927554 2011 log.go:172] (0xc00003a420) (0xc000592140) Stream removed, broadcasting: 3\nI0525 00:10:00.927566 2011 log.go:172] (0xc00003a420) (0xc0005930e0) Stream removed, broadcasting: 5\n" May 25 00:10:00.932: INFO: stdout: "affinity-clusterip-timeout-t2stt" May 25 00:10:00.932: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-3838, will wait for the garbage collector to delete the pods May 25 00:10:01.341: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 302.955188ms May 25 00:10:01.841: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.335373ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:10:15.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3838" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:50.256 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":79,"skipped":1010,"failed":0} SSSSSSSSSSS
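The stderr dumps above are the SPDY stream plumbing of kubectl exec, and the "+ curl ..." lines are the sh -x trace of the probe loop; sixteen identical hostnames show ClientIP affinity holding, and a different backend answering after the fifteen-second pause shows the affinity timeout expiring. A minimal client-go sketch of a Service configured this way (names, namespace, and the timeout value are illustrative; this is not the suite's own code):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // ClientIP affinity pins a client to one backend; TimeoutSeconds
        // bounds how long the pin survives idle time (value illustrative).
        timeout := int32(10)
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
            Spec: corev1.ServiceSpec{
                Selector:        map[string]string{"name": "affinity-clusterip-timeout"},
                Ports:           []corev1.ServicePort{{Port: 80}},
                SessionAffinity: corev1.ServiceAffinityClientIP,
                SessionAffinityConfig: &corev1.SessionAffinityConfig{
                    ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
                },
            },
        }
        if _, err := client.CoreV1().Services("default").Create(
            context.TODO(), svc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }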
------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:10:15.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 25 00:10:15.517: INFO: Waiting up to 5m0s for pod "client-containers-08e1a067-a418-49ab-b799-72052c338672" in namespace "containers-6783" to be "Succeeded or Failed" May 25 00:10:15.526: INFO: Pod "client-containers-08e1a067-a418-49ab-b799-72052c338672": Phase="Pending", Reason="", readiness=false. Elapsed: 9.461568ms May 25 00:10:17.530: INFO: Pod "client-containers-08e1a067-a418-49ab-b799-72052c338672": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012694752s May 25 00:10:19.533: INFO: Pod "client-containers-08e1a067-a418-49ab-b799-72052c338672": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016171116s STEP: Saw pod success May 25 00:10:19.533: INFO: Pod "client-containers-08e1a067-a418-49ab-b799-72052c338672" satisfied condition "Succeeded or Failed" May 25 00:10:19.535: INFO: Trying to get logs from node latest-worker2 pod client-containers-08e1a067-a418-49ab-b799-72052c338672 container test-container: STEP: delete the pod May 25 00:10:19.636: INFO: Waiting for pod client-containers-08e1a067-a418-49ab-b799-72052c338672 to disappear May 25 00:10:19.720: INFO: Pod client-containers-08e1a067-a418-49ab-b799-72052c338672 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:10:19.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6783" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":80,"skipped":1021,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:10:19.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 00:10:19.799: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2c73dcb-8904-4589-8c41-6d9c1a9feae5" in namespace "projected-3811" to be "Succeeded or Failed" May 25 00:10:19.802: INFO: Pod "downwardapi-volume-b2c73dcb-8904-4589-8c41-6d9c1a9feae5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.40608ms May 25 00:10:21.851: INFO: Pod "downwardapi-volume-b2c73dcb-8904-4589-8c41-6d9c1a9feae5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052340728s May 25 00:10:23.855: INFO: Pod "downwardapi-volume-b2c73dcb-8904-4589-8c41-6d9c1a9feae5": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.055874847s STEP: Saw pod success May 25 00:10:23.855: INFO: Pod "downwardapi-volume-b2c73dcb-8904-4589-8c41-6d9c1a9feae5" satisfied condition "Succeeded or Failed" May 25 00:10:23.858: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b2c73dcb-8904-4589-8c41-6d9c1a9feae5 container client-container: STEP: delete the pod May 25 00:10:23.896: INFO: Waiting for pod downwardapi-volume-b2c73dcb-8904-4589-8c41-6d9c1a9feae5 to disappear May 25 00:10:23.903: INFO: Pod downwardapi-volume-b2c73dcb-8904-4589-8c41-6d9c1a9feae5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:10:23.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3811" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":81,"skipped":1059,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:10:23.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:10:29.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3080" for this suite. 
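The adoption in the ReplicationController test is driven purely by labels: a pre-existing pod whose labels match a new RC's selector is claimed by the controller, which records an ownerReference on it. A minimal client-go sketch of that flow, with hypothetical names, image, and the default namespace (not the suite's own code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ns, labels := "default", map[string]string{"name": "pod-adoption"}

        // 1. An orphan pod carrying the 'name' label.
        podSpec := corev1.PodSpec{Containers: []corev1.Container{{
            Name: "pod-adoption", Image: "httpd",
        }}}
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
            Spec:       podSpec,
        }
        if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // 2. An RC whose selector matches the orphan's labels.
        one := int32(1)
        rc := &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &one,
                Selector: labels,
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec:       podSpec,
                },
            },
        }
        if _, err := client.CoreV1().ReplicationControllers(ns).Create(context.TODO(), rc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // 3. Adoption shows up as an ownerReference on the old pod
        //    (the controller may take a moment to patch it).
        got, err := client.CoreV1().Pods(ns).Get(context.TODO(), "pod-adoption", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, ref := range got.OwnerReferences {
            fmt.Printf("owned by %s %s\n", ref.Kind, ref.Name)
        }
    }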
• [SLOW TEST:5.174 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":82,"skipped":1085,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:10:29.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4432, will wait for the garbage collector to delete the pods May 25 00:10:35.378: INFO: Deleting Job.batch foo took: 6.127282ms May 25 00:10:35.478: INFO: Terminating Job.batch foo pods took: 100.263779ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:11:08.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4432" for this suite. 
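The deletion step above removes the Job and then waits for the garbage collector to remove its pods. With plain client-go, the closest equivalent is a delete with an explicit propagation policy; a hedged sketch (the suite uses its own deletion helpers, and the Job name "foo" and namespace are taken from the log):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Foreground propagation: the Job object only disappears after the
        // garbage collector has deleted the pods it owns.
        policy := metav1.DeletePropagationForeground
        if err := client.BatchV1().Jobs("job-4432").Delete(context.TODO(), "foo",
            metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
            panic(err)
        }
    }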
• [SLOW TEST:39.905 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":83,"skipped":1106,"failed":0} SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:11:08.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 25 00:11:13.660: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6030 pod-service-account-746f14fa-5f10-42d1-baba-caa4eca77d1c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 25 00:11:13.908: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6030 pod-service-account-746f14fa-5f10-42d1-baba-caa4eca77d1c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 25 00:11:14.157: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6030 pod-service-account-746f14fa-5f10-42d1-baba-caa4eca77d1c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:11:14.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6030" for this suite. 
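The three kubectl exec reads above work because the kubelet projects the ServiceAccount credentials into every auto-mounting pod at a fixed path. A small in-container sketch reading the same three files:

    package main

    import (
        "fmt"
        "io/ioutil"
        "path/filepath"
    )

    func main() {
        const dir = "/var/run/secrets/kubernetes.io/serviceaccount"
        for _, name := range []string{"token", "ca.crt", "namespace"} {
            data, err := ioutil.ReadFile(filepath.Join(dir, name))
            if err != nil {
                panic(err)
            }
            fmt.Printf("%s: %d bytes\n", name, len(data))
        }
    }

In-cluster clients build on exactly these files; rest.InClusterConfig in client-go reads the token and CA certificate from this directory.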
• [SLOW TEST:5.379 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":84,"skipped":1116,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:11:14.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:11:18.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1233" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":85,"skipped":1122,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:11:18.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-5182 STEP: creating a selector STEP: Creating the service pods in kubernetes May 25 00:11:18.713: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 25 00:11:18.781: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 00:11:20.785: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 00:11:22.799: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 00:11:24.786: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 00:11:26.785: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 00:11:28.785: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 00:11:30.785: INFO: The status of 
Pod netserver-0 is Running (Ready = false) May 25 00:11:32.800: INFO: The status of Pod netserver-0 is Running (Ready = true) May 25 00:11:32.807: INFO: The status of Pod netserver-1 is Running (Ready = false) May 25 00:11:34.816: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 25 00:11:38.839: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.116:8080/dial?request=hostname&protocol=http&host=10.244.1.107&port=8080&tries=1'] Namespace:pod-network-test-5182 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:11:38.839: INFO: >>> kubeConfig: /root/.kube/config I0525 00:11:38.876337 7 log.go:172] (0xc002ceb3f0) (0xc000e42b40) Create stream I0525 00:11:38.876372 7 log.go:172] (0xc002ceb3f0) (0xc000e42b40) Stream added, broadcasting: 1 I0525 00:11:38.878600 7 log.go:172] (0xc002ceb3f0) Reply frame received for 1 I0525 00:11:38.878659 7 log.go:172] (0xc002ceb3f0) (0xc00198e1e0) Create stream I0525 00:11:38.878754 7 log.go:172] (0xc002ceb3f0) (0xc00198e1e0) Stream added, broadcasting: 3 I0525 00:11:38.879490 7 log.go:172] (0xc002ceb3f0) Reply frame received for 3 I0525 00:11:38.879513 7 log.go:172] (0xc002ceb3f0) (0xc000e9ab40) Create stream I0525 00:11:38.879597 7 log.go:172] (0xc002ceb3f0) (0xc000e9ab40) Stream added, broadcasting: 5 I0525 00:11:38.880393 7 log.go:172] (0xc002ceb3f0) Reply frame received for 5 I0525 00:11:38.964996 7 log.go:172] (0xc002ceb3f0) Data frame received for 3 I0525 00:11:38.965030 7 log.go:172] (0xc00198e1e0) (3) Data frame handling I0525 00:11:38.965051 7 log.go:172] (0xc00198e1e0) (3) Data frame sent I0525 00:11:38.965822 7 log.go:172] (0xc002ceb3f0) Data frame received for 5 I0525 00:11:38.965925 7 log.go:172] (0xc000e9ab40) (5) Data frame handling I0525 00:11:38.966016 7 log.go:172] (0xc002ceb3f0) Data frame received for 3 I0525 00:11:38.966056 7 log.go:172] (0xc00198e1e0) (3) Data frame handling I0525 00:11:38.967684 7 log.go:172] (0xc002ceb3f0) Data frame received for 1 I0525 00:11:38.967725 7 log.go:172] (0xc000e42b40) (1) Data frame handling I0525 00:11:38.967762 7 log.go:172] (0xc000e42b40) (1) Data frame sent I0525 00:11:38.967805 7 log.go:172] (0xc002ceb3f0) (0xc000e42b40) Stream removed, broadcasting: 1 I0525 00:11:38.967851 7 log.go:172] (0xc002ceb3f0) Go away received I0525 00:11:38.967917 7 log.go:172] (0xc002ceb3f0) (0xc000e42b40) Stream removed, broadcasting: 1 I0525 00:11:38.967950 7 log.go:172] (0xc002ceb3f0) (0xc00198e1e0) Stream removed, broadcasting: 3 I0525 00:11:38.967976 7 log.go:172] (0xc002ceb3f0) (0xc000e9ab40) Stream removed, broadcasting: 5 May 25 00:11:38.968: INFO: Waiting for responses: map[] May 25 00:11:38.970: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.116:8080/dial?request=hostname&protocol=http&host=10.244.2.115&port=8080&tries=1'] Namespace:pod-network-test-5182 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:11:38.970: INFO: >>> kubeConfig: /root/.kube/config I0525 00:11:38.997503 7 log.go:172] (0xc002cebad0) (0xc000e43400) Create stream I0525 00:11:38.997528 7 log.go:172] (0xc002cebad0) (0xc000e43400) Stream added, broadcasting: 1 I0525 00:11:38.999390 7 log.go:172] (0xc002cebad0) Reply frame received for 1 I0525 00:11:38.999437 7 log.go:172] (0xc002cebad0) (0xc000e9abe0) Create stream I0525 00:11:38.999451 7 log.go:172] (0xc002cebad0) (0xc000e9abe0) Stream added, 
broadcasting: 3 I0525 00:11:39.000384 7 log.go:172] (0xc002cebad0) Reply frame received for 3 I0525 00:11:39.000417 7 log.go:172] (0xc002cebad0) (0xc000e9ac80) Create stream I0525 00:11:39.000429 7 log.go:172] (0xc002cebad0) (0xc000e9ac80) Stream added, broadcasting: 5 I0525 00:11:39.001914 7 log.go:172] (0xc002cebad0) Reply frame received for 5 I0525 00:11:39.059432 7 log.go:172] (0xc002cebad0) Data frame received for 3 I0525 00:11:39.059469 7 log.go:172] (0xc000e9abe0) (3) Data frame handling I0525 00:11:39.059497 7 log.go:172] (0xc000e9abe0) (3) Data frame sent I0525 00:11:39.059568 7 log.go:172] (0xc002cebad0) Data frame received for 3 I0525 00:11:39.059587 7 log.go:172] (0xc000e9abe0) (3) Data frame handling I0525 00:11:39.059652 7 log.go:172] (0xc002cebad0) Data frame received for 5 I0525 00:11:39.059665 7 log.go:172] (0xc000e9ac80) (5) Data frame handling I0525 00:11:39.061526 7 log.go:172] (0xc002cebad0) Data frame received for 1 I0525 00:11:39.061540 7 log.go:172] (0xc000e43400) (1) Data frame handling I0525 00:11:39.061546 7 log.go:172] (0xc000e43400) (1) Data frame sent I0525 00:11:39.061560 7 log.go:172] (0xc002cebad0) (0xc000e43400) Stream removed, broadcasting: 1 I0525 00:11:39.061573 7 log.go:172] (0xc002cebad0) Go away received I0525 00:11:39.061673 7 log.go:172] (0xc002cebad0) (0xc000e43400) Stream removed, broadcasting: 1 I0525 00:11:39.061687 7 log.go:172] (0xc002cebad0) (0xc000e9abe0) Stream removed, broadcasting: 3 I0525 00:11:39.061694 7 log.go:172] (0xc002cebad0) (0xc000e9ac80) Stream removed, broadcasting: 5 May 25 00:11:39.061: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:11:39.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5182" for this suite. 
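The connectivity check drives the test webserver's /dial endpoint, which makes the requested number of HTTP attempts against the target pod and reports which hostnames answered. A sketch of the same probe in Go, assuming the reply is a JSON object with a "responses" array (the URL is copied from the log; pod IPs are of course run-specific):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        // Ask the webserver in one pod to dial the other pod's hostname
        // endpoint once over HTTP.
        url := "http://10.244.2.116:8080/dial?request=hostname&protocol=http&host=10.244.1.107&port=8080&tries=1"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        // Assumed reply shape: {"responses":["netserver-0", ...]}.
        var out struct {
            Responses []string `json:"responses"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            panic(err)
        }
        fmt.Println(out.Responses)
    }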
• [SLOW TEST:20.466 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":86,"skipped":1126,"failed":0} S ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:11:39.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:11:39.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2570" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":87,"skipped":1127,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:11:39.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-df5b1c21-f609-4d16-b56e-54bd12e23bf3 STEP: Creating a pod to test consume configMaps May 25 00:11:39.364: INFO: Waiting up to 5m0s for pod "pod-configmaps-74d55ad4-3ffc-4ebb-abca-6a762c6ebd5c" in namespace "configmap-7571" to be "Succeeded or Failed" May 25 00:11:39.367: INFO: Pod "pod-configmaps-74d55ad4-3ffc-4ebb-abca-6a762c6ebd5c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.172729ms May 25 00:11:41.371: INFO: Pod "pod-configmaps-74d55ad4-3ffc-4ebb-abca-6a762c6ebd5c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007294824s May 25 00:11:43.375: INFO: Pod "pod-configmaps-74d55ad4-3ffc-4ebb-abca-6a762c6ebd5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011337557s STEP: Saw pod success May 25 00:11:43.375: INFO: Pod "pod-configmaps-74d55ad4-3ffc-4ebb-abca-6a762c6ebd5c" satisfied condition "Succeeded or Failed" May 25 00:11:43.378: INFO: Trying to get logs from node latest-worker pod pod-configmaps-74d55ad4-3ffc-4ebb-abca-6a762c6ebd5c container configmap-volume-test: STEP: delete the pod May 25 00:11:43.537: INFO: Waiting for pod pod-configmaps-74d55ad4-3ffc-4ebb-abca-6a762c6ebd5c to disappear May 25 00:11:43.540: INFO: Pod pod-configmaps-74d55ad4-3ffc-4ebb-abca-6a762c6ebd5c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:11:43.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7571" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":88,"skipped":1133,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:11:43.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 25 00:11:43.682: INFO: Waiting up to 5m0s for pod "pod-59a7230d-7e51-46b9-be09-3bc55b9ec632" in namespace "emptydir-6766" to be "Succeeded or Failed" May 25 00:11:43.692: INFO: Pod "pod-59a7230d-7e51-46b9-be09-3bc55b9ec632": Phase="Pending", Reason="", readiness=false. Elapsed: 9.956276ms May 25 00:11:45.696: INFO: Pod "pod-59a7230d-7e51-46b9-be09-3bc55b9ec632": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013946566s May 25 00:11:47.709: INFO: Pod "pod-59a7230d-7e51-46b9-be09-3bc55b9ec632": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027477637s STEP: Saw pod success May 25 00:11:47.709: INFO: Pod "pod-59a7230d-7e51-46b9-be09-3bc55b9ec632" satisfied condition "Succeeded or Failed" May 25 00:11:47.712: INFO: Trying to get logs from node latest-worker pod pod-59a7230d-7e51-46b9-be09-3bc55b9ec632 container test-container: STEP: delete the pod May 25 00:11:47.752: INFO: Waiting for pod pod-59a7230d-7e51-46b9-be09-3bc55b9ec632 to disappear May 25 00:11:47.774: INFO: Pod pod-59a7230d-7e51-46b9-be09-3bc55b9ec632 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:11:47.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6766" for this suite. 
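The "(root,0644,tmpfs)" variant mounts an emptyDir backed by memory (tmpfs) and verifies a file created in it with mode 0644. A minimal client-go sketch of such a pod (image, command, and names are illustrative, not the suite's own mounttest setup):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-tmpfs"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // Medium "Memory" backs the volume with tmpfs.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox",
                    Command: []string{"sh", "-c",
                        "echo hi > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && mount | grep ' /mnt '"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
                }},
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }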
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":89,"skipped":1134,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:11:47.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 25 00:11:47.938: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-968" to be "Succeeded or Failed" May 25 00:11:48.093: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 155.175763ms May 25 00:11:50.097: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159737353s May 25 00:11:52.102: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164279442s May 25 00:11:54.106: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.168516621s STEP: Saw pod success May 25 00:11:54.106: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 25 00:11:54.110: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 25 00:11:54.148: INFO: Waiting for pod pod-host-path-test to disappear May 25 00:11:54.164: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:11:54.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-968" for this suite. 
• [SLOW TEST:6.330 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":90,"skipped":1148,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:11:54.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 25 00:11:58.446: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:11:58.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2482" for this suite. 
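TerminationMessagePolicy FallbackToLogsOnError only falls back to the tail of the container log when the file at terminationMessagePath is empty and the container failed; here the pod succeeds and writes "OK" to the file, so the file contents win and the kubelet surfaces them in ContainerStatuses[].State.Terminated.Message. A container-spec sketch with illustrative values:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        ctr := corev1.Container{
            Name:  "termination-message-container",
            Image: "busybox",
            // Succeeds after writing the message file, so the file contents
            // ("OK") become State.Terminated.Message; the log-tail fallback
            // would only apply to a failed container with an empty file.
            Command:                  []string{"sh", "-c", "printf OK > /dev/termination-log"},
            TerminationMessagePath:   "/dev/termination-log",
            TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
        }
        fmt.Println(ctr.TerminationMessagePolicy)
    }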
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":91,"skipped":1170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:11:58.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:12:02.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1711" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":92,"skipped":1209,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:12:02.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-b75b8daf-046a-4ce8-97e8-e211ef239c3c STEP: Creating a pod to test consume configMaps May 25 00:12:02.944: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3e614642-b1b7-492d-887e-4e59e883d16e" in namespace "projected-7997" to be "Succeeded or Failed" May 25 00:12:03.031: INFO: Pod "pod-projected-configmaps-3e614642-b1b7-492d-887e-4e59e883d16e": Phase="Pending", Reason="", readiness=false. Elapsed: 87.050115ms May 25 00:12:05.111: INFO: Pod "pod-projected-configmaps-3e614642-b1b7-492d-887e-4e59e883d16e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166866197s May 25 00:12:07.115: INFO: Pod "pod-projected-configmaps-3e614642-b1b7-492d-887e-4e59e883d16e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.171278742s STEP: Saw pod success May 25 00:12:07.115: INFO: Pod "pod-projected-configmaps-3e614642-b1b7-492d-887e-4e59e883d16e" satisfied condition "Succeeded or Failed" May 25 00:12:07.118: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-3e614642-b1b7-492d-887e-4e59e883d16e container projected-configmap-volume-test: STEP: delete the pod May 25 00:12:07.157: INFO: Waiting for pod pod-projected-configmaps-3e614642-b1b7-492d-887e-4e59e883d16e to disappear May 25 00:12:07.170: INFO: Pod pod-projected-configmaps-3e614642-b1b7-492d-887e-4e59e883d16e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:12:07.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7997" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":93,"skipped":1213,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:12:07.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 25 00:12:11.416: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:12:11.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4997" for this suite. 
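The companion termination-message test points terminationMessagePath at a non-default file and writes it as a non-root user; the kubelet reads whatever path the container spec names. A sketch (the UID and path are illustrative, not the test's exact values):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        uid := int64(1000) // any non-root UID the image can run as
        ctr := corev1.Container{
            Name:                   "termination-message-container",
            Image:                  "busybox",
            Command:                []string{"sh", "-c", "printf DONE > /tmp/termination-log"},
            TerminationMessagePath: "/tmp/termination-log", // non-default, writable by the UID
            SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
        }
        fmt.Println(ctr.TerminationMessagePath)
    }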
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":94,"skipped":1215,"failed":0} ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:12:11.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:12:29.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2935" for this suite. • [SLOW TEST:18.071 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":95,"skipped":1215,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:12:29.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 00:12:29.709: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96d3c4f8-f16b-4f35-9de5-f50cb4066b67" in namespace "projected-8852" to be "Succeeded or Failed" May 25 00:12:29.716: INFO: Pod "downwardapi-volume-96d3c4f8-f16b-4f35-9de5-f50cb4066b67": Phase="Pending", Reason="", readiness=false. Elapsed: 7.511708ms May 25 00:12:31.720: INFO: Pod "downwardapi-volume-96d3c4f8-f16b-4f35-9de5-f50cb4066b67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011064325s May 25 00:12:33.758: INFO: Pod "downwardapi-volume-96d3c4f8-f16b-4f35-9de5-f50cb4066b67": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.048979677s STEP: Saw pod success May 25 00:12:33.758: INFO: Pod "downwardapi-volume-96d3c4f8-f16b-4f35-9de5-f50cb4066b67" satisfied condition "Succeeded or Failed" May 25 00:12:33.761: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-96d3c4f8-f16b-4f35-9de5-f50cb4066b67 container client-container: STEP: delete the pod May 25 00:12:33.796: INFO: Waiting for pod downwardapi-volume-96d3c4f8-f16b-4f35-9de5-f50cb4066b67 to disappear May 25 00:12:33.812: INFO: Pod downwardapi-volume-96d3c4f8-f16b-4f35-9de5-f50cb4066b67 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:12:33.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8852" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":96,"skipped":1227,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:12:33.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-48392bff-ffb5-441a-acb9-217cd373cdaf STEP: Creating a pod to test consume configMaps May 25 00:12:34.243: INFO: Waiting up to 5m0s for pod "pod-configmaps-cbd869bf-651e-4d0e-a6a8-482654facd81" in namespace "configmap-8832" to be "Succeeded or Failed" May 25 00:12:34.246: INFO: Pod "pod-configmaps-cbd869bf-651e-4d0e-a6a8-482654facd81": Phase="Pending", Reason="", readiness=false. Elapsed: 3.089433ms May 25 00:12:36.250: INFO: Pod "pod-configmaps-cbd869bf-651e-4d0e-a6a8-482654facd81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007801603s May 25 00:12:38.255: INFO: Pod "pod-configmaps-cbd869bf-651e-4d0e-a6a8-482654facd81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011948852s STEP: Saw pod success May 25 00:12:38.255: INFO: Pod "pod-configmaps-cbd869bf-651e-4d0e-a6a8-482654facd81" satisfied condition "Succeeded or Failed" May 25 00:12:38.257: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-cbd869bf-651e-4d0e-a6a8-482654facd81 container configmap-volume-test: STEP: delete the pod May 25 00:12:38.286: INFO: Waiting for pod pod-configmaps-cbd869bf-651e-4d0e-a6a8-482654facd81 to disappear May 25 00:12:38.293: INFO: Pod pod-configmaps-cbd869bf-651e-4d0e-a6a8-482654facd81 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:12:38.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8832" for this suite. 
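Both ConfigMap volume tests in this stretch reduce to the same volume source; defaultMode (exercised by the earlier [LinuxOnly] defaultMode variant) controls the permission bits of the projected files and defaults to 0644 when unset. The shape, with a hypothetical ConfigMap name:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // restrictive defaultMode, as in the [LinuxOnly] variant
        vol := corev1.Volume{
            Name: "configmap-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "configmap-test-volume", // hypothetical ConfigMap
                    },
                    DefaultMode: &mode, // omit for the 0644 default
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }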
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":97,"skipped":1228,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:12:38.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 00:12:39.531: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 00:12:41.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725962359, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725962359, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725962359, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725962359, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 00:12:44.580: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:12:44.599: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "webhook-5436" for this suite. STEP: Destroying namespace "webhook-5436-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.361 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":98,"skipped":1244,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:12:44.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-5hdz STEP: Creating a pod to test atomic-volume-subpath May 25 00:12:44.813: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5hdz" in namespace "subpath-4488" to be "Succeeded or Failed" May 25 00:12:44.819: INFO: Pod "pod-subpath-test-configmap-5hdz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04971ms May 25 00:12:46.824: INFO: Pod "pod-subpath-test-configmap-5hdz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010916778s May 25 00:12:48.828: INFO: Pod "pod-subpath-test-configmap-5hdz": Phase="Running", Reason="", readiness=true. Elapsed: 4.015067085s May 25 00:12:50.833: INFO: Pod "pod-subpath-test-configmap-5hdz": Phase="Running", Reason="", readiness=true. Elapsed: 6.019991311s May 25 00:12:52.949: INFO: Pod "pod-subpath-test-configmap-5hdz": Phase="Running", Reason="", readiness=true. Elapsed: 8.136171053s May 25 00:12:54.952: INFO: Pod "pod-subpath-test-configmap-5hdz": Phase="Running", Reason="", readiness=true. Elapsed: 10.139131461s May 25 00:12:56.956: INFO: Pod "pod-subpath-test-configmap-5hdz": Phase="Running", Reason="", readiness=true. Elapsed: 12.142816083s May 25 00:12:58.960: INFO: Pod "pod-subpath-test-configmap-5hdz": Phase="Running", Reason="", readiness=true. Elapsed: 14.146788386s May 25 00:13:00.965: INFO: Pod "pod-subpath-test-configmap-5hdz": Phase="Running", Reason="", readiness=true. Elapsed: 16.15175957s May 25 00:13:02.969: INFO: Pod "pod-subpath-test-configmap-5hdz": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.156037196s May 25 00:13:04.974: INFO: Pod "pod-subpath-test-configmap-5hdz": Phase="Running", Reason="", readiness=true. Elapsed: 20.16042273s May 25 00:13:06.977: INFO: Pod "pod-subpath-test-configmap-5hdz": Phase="Running", Reason="", readiness=true. Elapsed: 22.164256118s May 25 00:13:08.982: INFO: Pod "pod-subpath-test-configmap-5hdz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.168720577s STEP: Saw pod success May 25 00:13:08.982: INFO: Pod "pod-subpath-test-configmap-5hdz" satisfied condition "Succeeded or Failed" May 25 00:13:08.985: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-5hdz container test-container-subpath-configmap-5hdz: STEP: delete the pod May 25 00:13:09.024: INFO: Waiting for pod pod-subpath-test-configmap-5hdz to disappear May 25 00:13:09.031: INFO: Pod pod-subpath-test-configmap-5hdz no longer exists STEP: Deleting pod pod-subpath-test-configmap-5hdz May 25 00:13:09.031: INFO: Deleting pod "pod-subpath-test-configmap-5hdz" in namespace "subpath-4488" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:13:09.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4488" for this suite. • [SLOW TEST:24.343 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":99,"skipped":1250,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:13:09.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6200 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 25 00:13:09.154: INFO: Found 0 stateful pods, waiting for 3 May 25 00:13:19.160: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 25 00:13:19.160: INFO: Waiting for pod ss2-1 to enter Running - 
Ready=true, currently Running - Ready=true May 25 00:13:19.160: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 25 00:13:29.160: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 25 00:13:29.160: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 25 00:13:29.160: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 25 00:13:29.190: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 25 00:13:39.270: INFO: Updating stateful set ss2 May 25 00:13:39.399: INFO: Waiting for Pod statefulset-6200/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 25 00:13:50.747: INFO: Found 2 stateful pods, waiting for 3 May 25 00:14:00.752: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 25 00:14:00.752: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 25 00:14:00.752: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 25 00:14:00.777: INFO: Updating stateful set ss2 May 25 00:14:00.826: INFO: Waiting for Pod statefulset-6200/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 25 00:14:10.851: INFO: Updating stateful set ss2 May 25 00:14:10.913: INFO: Waiting for StatefulSet statefulset-6200/ss2 to complete update May 25 00:14:10.913: INFO: Waiting for Pod statefulset-6200/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 25 00:14:20.922: INFO: Deleting all statefulset in ns statefulset-6200 May 25 00:14:20.924: INFO: Scaling statefulset ss2 to 0 May 25 00:14:40.972: INFO: Waiting for statefulset status.replicas updated to 0 May 25 00:14:40.975: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:14:40.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6200" for this suite. 
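The canary and phased rollout above hinge on the StatefulSet RollingUpdate partition: only pods with an ordinal greater than or equal to the partition are moved to the new revision, so lowering the partition phases the rollout, highest ordinal first. A sketch of that two-step flow with strategic merge patches (client-go v0.18+ assumed; the set name matches the log, but the container name and image fields are illustrative guesses, not read from the suite):

```go
// Sketch of a canary via updateStrategy.rollingUpdate.partition: hold the
// rollout at ordinal 2 so only ss2-2 gets the new image, then release it.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()
	ns := "default"

	// Step 1: change the template image but pin the partition at 2, so only
	// the highest ordinal becomes a canary on the new revision. The
	// container name "webserver" is an assumption for this sketch.
	canary := []byte(`{
	  "spec": {
	    "updateStrategy": {"rollingUpdate": {"partition": 2}},
	    "template": {"spec": {"containers": [
	      {"name": "webserver", "image": "docker.io/library/httpd:2.4.39-alpine"}
	    ]}}
	  }
	}`)
	if _, err := client.AppsV1().StatefulSets(ns).Patch(
		ctx, "ss2", types.StrategicMergePatchType, canary, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Step 2: once the canary looks healthy, drop the partition to 0 and
	// the controller rolls the remaining ordinals in descending order.
	release := []byte(`{"spec": {"updateStrategy": {"rollingUpdate": {"partition": 0}}}}`)
	if _, err := client.AppsV1().StatefulSets(ns).Patch(
		ctx, "ss2", types.StrategicMergePatchType, release, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("rollout unblocked")
}
```

This also explains the "Not applying an update when the partition is greater than the number of replicas" step in the log: with the partition above every ordinal, no pod qualifies for the new revision at all.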
• [SLOW TEST:91.954 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":100,"skipped":1257,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:14:40.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 25 00:14:41.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9206' May 25 00:14:44.087: INFO: stderr: "" May 25 00:14:44.087: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 25 00:14:44.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9206' May 25 00:14:54.866: INFO: stderr: "" May 25 00:14:54.866: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:14:54.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9206" for this suite. 
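The kubectl invocation in the spec above is equivalent to creating a bare Pod with RestartPolicy=Never: the kubelet will not restart the container, so the pod reaches a terminal Succeeded or Failed phase instead of cycling. A sketch of that equivalence plus a polling loop in the spirit of the framework's waits (client-go v0.18+ assumed; names and timeouts are illustrative):

```go
// Create a one-shot pod and poll its phase. With httpd the pod stays
// Running; swap in a short-lived command to observe Succeeded.
package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()
	ns := "default"

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "httpd-once"},
		Spec: corev1.PodSpec{
			// Never: the container is not restarted on exit, which is what
			// --restart=Never selects in kubectl run.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "httpd",
				Image: "docker.io/library/httpd:2.4.38-alpine",
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Poll until the pod leaves Pending, printing each observed phase.
	err = wait.PollImmediate(2*time.Second, time.Minute, func() (bool, error) {
		p, err := client.CoreV1().Pods(ns).Get(ctx, "httpd-once", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Println("phase:", p.Status.Phase)
		return p.Status.Phase != corev1.PodPending, nil
	})
	if err != nil {
		panic(err)
	}
}
```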
• [SLOW TEST:13.874 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":101,"skipped":1261,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:14:54.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:14:54.994: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 25 00:14:58.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-474 create -f -' May 25 00:15:01.399: INFO: stderr: "" May 25 00:15:01.399: INFO: stdout: "e2e-test-crd-publish-openapi-5294-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 25 00:15:01.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-474 delete e2e-test-crd-publish-openapi-5294-crds test-foo' May 25 00:15:01.537: INFO: stderr: "" May 25 00:15:01.537: INFO: stdout: "e2e-test-crd-publish-openapi-5294-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 25 00:15:01.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-474 apply -f -' May 25 00:15:01.806: INFO: stderr: "" May 25 00:15:01.807: INFO: stdout: "e2e-test-crd-publish-openapi-5294-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 25 00:15:01.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-474 delete e2e-test-crd-publish-openapi-5294-crds test-foo' May 25 00:15:01.915: INFO: stderr: "" May 25 00:15:01.915: INFO: stdout: "e2e-test-crd-publish-openapi-5294-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 25 00:15:01.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-474 create -f -' May 25 00:15:02.169: INFO: rc: 1 May 25 00:15:02.169: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-474 apply -f -' May 25 00:15:02.424: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 25 00:15:02.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-474 create -f -' May 25 00:15:02.667: INFO: rc: 1 May 25 00:15:02.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-474 apply -f -' May 25 00:15:02.905: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 25 00:15:02.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5294-crds' May 25 00:15:03.162: INFO: stderr: "" May 25 00:15:03.162: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5294-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 25 00:15:03.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5294-crds.metadata' May 25 00:15:03.425: INFO: stderr: "" May 25 00:15:03.425: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5294-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. 
Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. 
If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. 
Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 25 00:15:03.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5294-crds.spec' May 25 00:15:03.643: INFO: stderr: "" May 25 00:15:03.643: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5294-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 25 00:15:03.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5294-crds.spec.bars' May 25 00:15:03.883: INFO: stderr: "" May 25 00:15:03.883: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5294-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 25 00:15:03.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5294-crds.spec.bars2' May 25 00:15:04.136: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:15:07.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-474" for this suite. • [SLOW TEST:12.203 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":102,"skipped":1262,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:15:07.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7280.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7280.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7280.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7280.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7280.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7280.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 00:15:13.238: INFO: DNS probes using dns-7280/dns-test-dcfaad0c-415e-425d-9863-c4918fc90e24 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:15:13.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7280" for this suite. • [SLOW TEST:6.255 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":103,"skipped":1294,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:15:13.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-0681571b-d93d-43d0-a6d1-cba02901b142 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:15:13.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-475" for this suite. 
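The ConfigMap spec just above is a pure admission check: the API server's validation rejects a ConfigMap whose Data map contains an empty key, so the Create call fails and nothing is persisted, which is why the test body is a single STEP. A minimal sketch of the negative case (client-go v0.18+ assumed; the object name is illustrative):

```go
// Attempt to create a ConfigMap with an empty data key and confirm the
// server returns an Invalid (HTTP 422) error.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "empty-key-cm"},
		Data:       map[string]string{"": "value"}, // empty key: fails validation
	}
	_, err = client.CoreV1().ConfigMaps("default").Create(
		context.Background(), cm, metav1.CreateOptions{})
	if apierrors.IsInvalid(err) {
		fmt.Println("rejected as expected:", err)
		return
	}
	panic(fmt.Sprintf("expected Invalid error, got: %v", err))
}
```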
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":104,"skipped":1297,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:15:13.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 25 00:15:13.953: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6586 /api/v1/namespaces/watch-6586/configmaps/e2e-watch-test-watch-closed 3fe5809d-0d26-4bf5-b5ee-36bb0ab51105 7416625 0 2020-05-25 00:15:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-25 00:15:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 25 00:15:13.954: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6586 /api/v1/namespaces/watch-6586/configmaps/e2e-watch-test-watch-closed 3fe5809d-0d26-4bf5-b5ee-36bb0ab51105 7416626 0 2020-05-25 00:15:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-25 00:15:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 25 00:15:14.000: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6586 /api/v1/namespaces/watch-6586/configmaps/e2e-watch-test-watch-closed 3fe5809d-0d26-4bf5-b5ee-36bb0ab51105 7416628 0 2020-05-25 00:15:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-25 00:15:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 00:15:14.000: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6586 /api/v1/namespaces/watch-6586/configmaps/e2e-watch-test-watch-closed 3fe5809d-0d26-4bf5-b5ee-36bb0ab51105 7416630 0 2020-05-25 00:15:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-25 00:15:13 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:15:14.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6586" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":105,"skipped":1326,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:15:14.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-f574dccd-eea3-420f-8c3c-baa05b8a0471 STEP: Creating a pod to test consume secrets May 25 00:15:14.269: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6290fb64-e181-4e06-b249-2a43412f3b72" in namespace "projected-1810" to be "Succeeded or Failed" May 25 00:15:14.328: INFO: Pod "pod-projected-secrets-6290fb64-e181-4e06-b249-2a43412f3b72": Phase="Pending", Reason="", readiness=false. Elapsed: 58.159441ms May 25 00:15:16.510: INFO: Pod "pod-projected-secrets-6290fb64-e181-4e06-b249-2a43412f3b72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240771046s May 25 00:15:18.523: INFO: Pod "pod-projected-secrets-6290fb64-e181-4e06-b249-2a43412f3b72": Phase="Running", Reason="", readiness=true. Elapsed: 4.253039902s May 25 00:15:20.527: INFO: Pod "pod-projected-secrets-6290fb64-e181-4e06-b249-2a43412f3b72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.257809207s STEP: Saw pod success May 25 00:15:20.527: INFO: Pod "pod-projected-secrets-6290fb64-e181-4e06-b249-2a43412f3b72" satisfied condition "Succeeded or Failed" May 25 00:15:20.530: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-6290fb64-e181-4e06-b249-2a43412f3b72 container projected-secret-volume-test: STEP: delete the pod May 25 00:15:20.583: INFO: Waiting for pod pod-projected-secrets-6290fb64-e181-4e06-b249-2a43412f3b72 to disappear May 25 00:15:20.594: INFO: Pod pod-projected-secrets-6290fb64-e181-4e06-b249-2a43412f3b72 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:15:20.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1810" for this suite. 
• [SLOW TEST:6.563 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":106,"skipped":1341,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:15:20.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:15:20.681: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 25 00:15:23.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7016 create -f -' May 25 00:15:27.037: INFO: stderr: "" May 25 00:15:27.037: INFO: stdout: "e2e-test-crd-publish-openapi-8488-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 25 00:15:27.037: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7016 delete e2e-test-crd-publish-openapi-8488-crds test-cr' May 25 00:15:27.157: INFO: stderr: "" May 25 00:15:27.157: INFO: stdout: "e2e-test-crd-publish-openapi-8488-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 25 00:15:27.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7016 apply -f -' May 25 00:15:27.438: INFO: stderr: "" May 25 00:15:27.438: INFO: stdout: "e2e-test-crd-publish-openapi-8488-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 25 00:15:27.438: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7016 delete e2e-test-crd-publish-openapi-8488-crds test-cr' May 25 00:15:27.528: INFO: stderr: "" May 25 00:15:27.528: INFO: stdout: "e2e-test-crd-publish-openapi-8488-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 25 00:15:27.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8488-crds' May 25 00:15:27.751: INFO: stderr: "" May 25 00:15:27.751: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8488-crd\nVERSION: 
crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:15:30.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7016" for this suite. • [SLOW TEST:10.070 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":107,"skipped":1382,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:15:30.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 25 00:15:30.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9635' May 25 00:15:30.872: INFO: stderr: "" May 25 00:15:30.872: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 25 00:15:35.922: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9635 -o json' May 25 00:15:36.036: INFO: stderr: "" May 25 00:15:36.036: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-25T00:15:30Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-25T00:15:30Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.123\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-25T00:15:33Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9635\",\n \"resourceVersion\": \"7416765\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9635/pods/e2e-test-httpd-pod\",\n \"uid\": \"441fcdee-d24a-4afc-9ad8-aba193cf302a\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-xk9sv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-xk9sv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": 
\"default-token-xk9sv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-25T00:15:30Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-25T00:15:33Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-25T00:15:33Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-25T00:15:30Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://5747c5e13f8274aff249f2fb9de88ca07b299e01d816fe772b7f8298d948af74\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-25T00:15:33Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.123\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.123\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-25T00:15:30Z\"\n }\n}\n" STEP: replace the image in the pod May 25 00:15:36.037: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9635' May 25 00:15:36.366: INFO: stderr: "" May 25 00:15:36.366: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 25 00:15:36.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9635' May 25 00:15:44.893: INFO: stderr: "" May 25 00:15:44.893: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:15:44.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9635" for this suite. 
• [SLOW TEST:14.231 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":108,"skipped":1390,"failed":0} S ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:15:44.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-473e9120-a135-4231-b794-397ea9b1373c STEP: Creating secret with name secret-projected-all-test-volume-94a0718f-b139-43b0-ace0-16ded639416b STEP: Creating a pod to test Check all projections for projected volume plugin May 25 00:15:45.026: INFO: Waiting up to 5m0s for pod "projected-volume-e657520a-9cd3-4678-99d3-f6264cf63438" in namespace "projected-9594" to be "Succeeded or Failed" May 25 00:15:45.044: INFO: Pod "projected-volume-e657520a-9cd3-4678-99d3-f6264cf63438": Phase="Pending", Reason="", readiness=false. Elapsed: 18.035427ms May 25 00:15:47.049: INFO: Pod "projected-volume-e657520a-9cd3-4678-99d3-f6264cf63438": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022826149s May 25 00:15:49.053: INFO: Pod "projected-volume-e657520a-9cd3-4678-99d3-f6264cf63438": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027309133s STEP: Saw pod success May 25 00:15:49.053: INFO: Pod "projected-volume-e657520a-9cd3-4678-99d3-f6264cf63438" satisfied condition "Succeeded or Failed" May 25 00:15:49.056: INFO: Trying to get logs from node latest-worker2 pod projected-volume-e657520a-9cd3-4678-99d3-f6264cf63438 container projected-all-volume-test: STEP: delete the pod May 25 00:15:49.091: INFO: Waiting for pod projected-volume-e657520a-9cd3-4678-99d3-f6264cf63438 to disappear May 25 00:15:49.098: INFO: Pod projected-volume-e657520a-9cd3-4678-99d3-f6264cf63438 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:15:49.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9594" for this suite. 
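The "all components" wording in the projected-combined spec above means one projected volume fanning in every supported source type at once, so the container sees ConfigMap data, Secret data, and downward API fields as files under a single mount. A sketch of that volume (client-go v0.18+ assumed; it reuses the hypothetical demo-config and demo-secret objects from the earlier sketches, which must already exist):

```go
// One projected volume combining ConfigMap, Secret, and downward API
// sources; the container lists the mount and reads its own pod name.
package main

import (
	"context"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	allSources := []corev1.VolumeProjection{
		{ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
		}},
		{Secret: &corev1.SecretProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
		}},
		{DownwardAPI: &corev1.DownwardAPIProjection{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path:     "podname",
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
			}},
		}},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-all-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "ls -R /all && cat /all/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "all", MountPath: "/all"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "all",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{Sources: allSources},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(
		context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```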
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":109,"skipped":1391,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:15:49.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:15:49.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2409" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":110,"skipped":1413,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:15:49.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 00:15:50.356: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 00:15:52.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725962550, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725962550, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725962550, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725962550, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 00:15:55.513: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 25 00:15:55.549: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:15:55.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1604" for this suite. STEP: Destroying namespace "webhook-1604-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.493 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":111,"skipped":1463,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:15:55.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-8916 STEP: creating a selector STEP: Creating the service pods in kubernetes May 25 00:15:55.803: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 25 00:15:55.915: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 00:15:58.035: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 00:15:59.920: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 00:16:01.933: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 00:16:03.926: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 00:16:05.927: INFO: The status of Pod netserver-0 is Running (Ready = false) May 
25 00:16:07.919: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 00:16:09.992: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 00:16:11.921: INFO: The status of Pod netserver-0 is Running (Ready = true) May 25 00:16:11.928: INFO: The status of Pod netserver-1 is Running (Ready = false) May 25 00:16:13.932: INFO: The status of Pod netserver-1 is Running (Ready = false) May 25 00:16:15.932: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 25 00:16:20.032: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.124 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8916 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:16:20.033: INFO: >>> kubeConfig: /root/.kube/config I0525 00:16:20.066799 7 log.go:172] (0xc0012e6840) (0xc000e437c0) Create stream I0525 00:16:20.066843 7 log.go:172] (0xc0012e6840) (0xc000e437c0) Stream added, broadcasting: 1 I0525 00:16:20.068537 7 log.go:172] (0xc0012e6840) Reply frame received for 1 I0525 00:16:20.068585 7 log.go:172] (0xc0012e6840) (0xc000e43900) Create stream I0525 00:16:20.068606 7 log.go:172] (0xc0012e6840) (0xc000e43900) Stream added, broadcasting: 3 I0525 00:16:20.069791 7 log.go:172] (0xc0012e6840) Reply frame received for 3 I0525 00:16:20.069858 7 log.go:172] (0xc0012e6840) (0xc000e9a280) Create stream I0525 00:16:20.069882 7 log.go:172] (0xc0012e6840) (0xc000e9a280) Stream added, broadcasting: 5 I0525 00:16:20.070760 7 log.go:172] (0xc0012e6840) Reply frame received for 5 I0525 00:16:21.155079 7 log.go:172] (0xc0012e6840) Data frame received for 3 I0525 00:16:21.155125 7 log.go:172] (0xc000e43900) (3) Data frame handling I0525 00:16:21.155249 7 log.go:172] (0xc000e43900) (3) Data frame sent I0525 00:16:21.155422 7 log.go:172] (0xc0012e6840) Data frame received for 5 I0525 00:16:21.155462 7 log.go:172] (0xc000e9a280) (5) Data frame handling I0525 00:16:21.155931 7 log.go:172] (0xc0012e6840) Data frame received for 3 I0525 00:16:21.155959 7 log.go:172] (0xc000e43900) (3) Data frame handling I0525 00:16:21.158253 7 log.go:172] (0xc0012e6840) Data frame received for 1 I0525 00:16:21.158302 7 log.go:172] (0xc000e437c0) (1) Data frame handling I0525 00:16:21.158319 7 log.go:172] (0xc000e437c0) (1) Data frame sent I0525 00:16:21.158344 7 log.go:172] (0xc0012e6840) (0xc000e437c0) Stream removed, broadcasting: 1 I0525 00:16:21.158361 7 log.go:172] (0xc0012e6840) Go away received I0525 00:16:21.158597 7 log.go:172] (0xc0012e6840) (0xc000e437c0) Stream removed, broadcasting: 1 I0525 00:16:21.158636 7 log.go:172] (0xc0012e6840) (0xc000e43900) Stream removed, broadcasting: 3 I0525 00:16:21.158661 7 log.go:172] (0xc0012e6840) (0xc000e9a280) Stream removed, broadcasting: 5 May 25 00:16:21.158: INFO: Found all expected endpoints: [netserver-0] May 25 00:16:21.162: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.131 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8916 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:16:21.162: INFO: >>> kubeConfig: /root/.kube/config I0525 00:16:21.193869 7 log.go:172] (0xc005674630) (0xc000e9a6e0) Create stream I0525 00:16:21.193902 7 log.go:172] (0xc005674630) (0xc000e9a6e0) Stream added, broadcasting: 1 I0525 00:16:21.195897 7 log.go:172] (0xc005674630) Reply frame received for 1 I0525 00:16:21.195950 7 log.go:172] 
(0xc005674630) (0xc001901860) Create stream I0525 00:16:21.195962 7 log.go:172] (0xc005674630) (0xc001901860) Stream added, broadcasting: 3 I0525 00:16:21.196820 7 log.go:172] (0xc005674630) Reply frame received for 3 I0525 00:16:21.196857 7 log.go:172] (0xc005674630) (0xc00134fae0) Create stream I0525 00:16:21.196870 7 log.go:172] (0xc005674630) (0xc00134fae0) Stream added, broadcasting: 5 I0525 00:16:21.198211 7 log.go:172] (0xc005674630) Reply frame received for 5 I0525 00:16:22.282114 7 log.go:172] (0xc005674630) Data frame received for 5 I0525 00:16:22.282172 7 log.go:172] (0xc00134fae0) (5) Data frame handling I0525 00:16:22.282215 7 log.go:172] (0xc005674630) Data frame received for 3 I0525 00:16:22.282253 7 log.go:172] (0xc001901860) (3) Data frame handling I0525 00:16:22.282273 7 log.go:172] (0xc001901860) (3) Data frame sent I0525 00:16:22.282405 7 log.go:172] (0xc005674630) Data frame received for 3 I0525 00:16:22.282449 7 log.go:172] (0xc001901860) (3) Data frame handling I0525 00:16:22.284625 7 log.go:172] (0xc005674630) Data frame received for 1 I0525 00:16:22.284657 7 log.go:172] (0xc000e9a6e0) (1) Data frame handling I0525 00:16:22.284680 7 log.go:172] (0xc000e9a6e0) (1) Data frame sent I0525 00:16:22.284704 7 log.go:172] (0xc005674630) (0xc000e9a6e0) Stream removed, broadcasting: 1 I0525 00:16:22.284741 7 log.go:172] (0xc005674630) Go away received I0525 00:16:22.284956 7 log.go:172] (0xc005674630) (0xc000e9a6e0) Stream removed, broadcasting: 1 I0525 00:16:22.284993 7 log.go:172] (0xc005674630) (0xc001901860) Stream removed, broadcasting: 3 I0525 00:16:22.285019 7 log.go:172] (0xc005674630) (0xc00134fae0) Stream removed, broadcasting: 5 May 25 00:16:22.285: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:16:22.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8916" for this suite. 
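[Editor's sketch] Each UDP check above is a netcat probe run from the host-network test pod to a netserver pod IP on port 8081; agnhost's UDP handler echoes back its hostname, which the test matches against the expected endpoint list. Roughly the same probe by hand (namespace and pod names taken from the run above, pod IP resolved at run time):

    POD_IP=$(kubectl get pod netserver-0 -n pod-network-test-8916 -o jsonpath='{.status.podIP}')
    kubectl exec -n pod-network-test-8916 host-test-container-pod -- \
      /bin/sh -c "echo hostName | nc -w 1 -u ${POD_IP} 8081 | grep -v '^\s*\$'"
    # Expected output: the netserver pod's hostname (e.g. netserver-0).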
• [SLOW TEST:26.539 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":112,"skipped":1467,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:16:22.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 25 00:16:26.455: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3668 PodName:pod-sharedvolume-00314b71-e685-4d5a-9cff-dbafc1bc3f1a ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:16:26.455: INFO: >>> kubeConfig: /root/.kube/config I0525 00:16:26.482842 7 log.go:172] (0xc00635af20) (0xc0016e8a00) Create stream I0525 00:16:26.482871 7 log.go:172] (0xc00635af20) (0xc0016e8a00) Stream added, broadcasting: 1 I0525 00:16:26.485063 7 log.go:172] (0xc00635af20) Reply frame received for 1 I0525 00:16:26.485318 7 log.go:172] (0xc00635af20) (0xc000ccbd60) Create stream I0525 00:16:26.485337 7 log.go:172] (0xc00635af20) (0xc000ccbd60) Stream added, broadcasting: 3 I0525 00:16:26.486291 7 log.go:172] (0xc00635af20) Reply frame received for 3 I0525 00:16:26.486334 7 log.go:172] (0xc00635af20) (0xc000e439a0) Create stream I0525 00:16:26.486353 7 log.go:172] (0xc00635af20) (0xc000e439a0) Stream added, broadcasting: 5 I0525 00:16:26.487384 7 log.go:172] (0xc00635af20) Reply frame received for 5 I0525 00:16:26.575341 7 log.go:172] (0xc00635af20) Data frame received for 5 I0525 00:16:26.575373 7 log.go:172] (0xc000e439a0) (5) Data frame handling I0525 00:16:26.575399 7 log.go:172] (0xc00635af20) Data frame received for 3 I0525 00:16:26.575424 7 log.go:172] (0xc000ccbd60) (3) Data frame handling I0525 00:16:26.575443 7 log.go:172] (0xc000ccbd60) (3) Data frame sent I0525 00:16:26.575467 7 log.go:172] (0xc00635af20) Data frame received for 3 I0525 00:16:26.575498 7 log.go:172] (0xc000ccbd60) (3) Data frame handling I0525 00:16:26.576946 7 log.go:172] (0xc00635af20) Data frame received for 1 I0525 00:16:26.576965 7 log.go:172] (0xc0016e8a00) (1) Data frame handling I0525 00:16:26.576979 7 log.go:172] (0xc0016e8a00) (1) Data frame sent I0525 00:16:26.576998 7 log.go:172] (0xc00635af20) (0xc0016e8a00) Stream removed,
broadcasting: 1 I0525 00:16:26.577025 7 log.go:172] (0xc00635af20) Go away received I0525 00:16:26.577298 7 log.go:172] (0xc00635af20) (0xc0016e8a00) Stream removed, broadcasting: 1 I0525 00:16:26.577331 7 log.go:172] (0xc00635af20) (0xc000ccbd60) Stream removed, broadcasting: 3 I0525 00:16:26.577340 7 log.go:172] (0xc00635af20) (0xc000e439a0) Stream removed, broadcasting: 5 May 25 00:16:26.577: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:16:26.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3668" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":113,"skipped":1468,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:16:26.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 00:16:26.744: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0778777-c992-43d2-b5d9-15a98c4044df" in namespace "downward-api-142" to be "Succeeded or Failed" May 25 00:16:26.787: INFO: Pod "downwardapi-volume-e0778777-c992-43d2-b5d9-15a98c4044df": Phase="Pending", Reason="", readiness=false. Elapsed: 42.884535ms May 25 00:16:29.251: INFO: Pod "downwardapi-volume-e0778777-c992-43d2-b5d9-15a98c4044df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.506167908s May 25 00:16:31.255: INFO: Pod "downwardapi-volume-e0778777-c992-43d2-b5d9-15a98c4044df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.510829351s STEP: Saw pod success May 25 00:16:31.255: INFO: Pod "downwardapi-volume-e0778777-c992-43d2-b5d9-15a98c4044df" satisfied condition "Succeeded or Failed" May 25 00:16:31.259: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e0778777-c992-43d2-b5d9-15a98c4044df container client-container: STEP: delete the pod May 25 00:16:31.336: INFO: Waiting for pod downwardapi-volume-e0778777-c992-43d2-b5d9-15a98c4044df to disappear May 25 00:16:31.342: INFO: Pod downwardapi-volume-e0778777-c992-43d2-b5d9-15a98c4044df no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:16:31.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-142" for this suite. 
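[Editor's sketch] The downward-API volume test above mounts the container's own CPU request as a file and checks its contents. A minimal standalone pod doing the same (names and the 250m request are hypothetical; the divisor controls the units written to the file):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-cpu-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
    EOF
    # kubectl logs downward-cpu-demo should print 250 (millicores, per the 1m divisor).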
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":114,"skipped":1485,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:16:31.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 25 00:16:31.412: INFO: Waiting up to 5m0s for pod "var-expansion-36f1920a-d3b9-4f28-9088-49eeb89f3e83" in namespace "var-expansion-241" to be "Succeeded or Failed" May 25 00:16:31.427: INFO: Pod "var-expansion-36f1920a-d3b9-4f28-9088-49eeb89f3e83": Phase="Pending", Reason="", readiness=false. Elapsed: 14.401965ms May 25 00:16:33.431: INFO: Pod "var-expansion-36f1920a-d3b9-4f28-9088-49eeb89f3e83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01862474s May 25 00:16:35.435: INFO: Pod "var-expansion-36f1920a-d3b9-4f28-9088-49eeb89f3e83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022630096s STEP: Saw pod success May 25 00:16:35.435: INFO: Pod "var-expansion-36f1920a-d3b9-4f28-9088-49eeb89f3e83" satisfied condition "Succeeded or Failed" May 25 00:16:35.438: INFO: Trying to get logs from node latest-worker pod var-expansion-36f1920a-d3b9-4f28-9088-49eeb89f3e83 container dapi-container: STEP: delete the pod May 25 00:16:35.464: INFO: Waiting for pod var-expansion-36f1920a-d3b9-4f28-9088-49eeb89f3e83 to disappear May 25 00:16:35.483: INFO: Pod var-expansion-36f1920a-d3b9-4f28-9088-49eeb89f3e83 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:16:35.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-241" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":115,"skipped":1486,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:16:35.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 25 00:16:35.616: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4819 /api/v1/namespaces/watch-4819/configmaps/e2e-watch-test-configmap-a e25d934a-23c8-499f-aeb8-e3d7f6857aa0 7417197 0 2020-05-25 00:16:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-25 00:16:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 25 00:16:35.616: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4819 /api/v1/namespaces/watch-4819/configmaps/e2e-watch-test-configmap-a e25d934a-23c8-499f-aeb8-e3d7f6857aa0 7417197 0 2020-05-25 00:16:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-25 00:16:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 25 00:16:45.625: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4819 /api/v1/namespaces/watch-4819/configmaps/e2e-watch-test-configmap-a e25d934a-23c8-499f-aeb8-e3d7f6857aa0 7417244 0 2020-05-25 00:16:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-25 00:16:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 00:16:45.626: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4819 /api/v1/namespaces/watch-4819/configmaps/e2e-watch-test-configmap-a e25d934a-23c8-499f-aeb8-e3d7f6857aa0 7417244 0 2020-05-25 00:16:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-25 00:16:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the 
notification May 25 00:16:55.634: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4819 /api/v1/namespaces/watch-4819/configmaps/e2e-watch-test-configmap-a e25d934a-23c8-499f-aeb8-e3d7f6857aa0 7417274 0 2020-05-25 00:16:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-25 00:16:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 00:16:55.635: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4819 /api/v1/namespaces/watch-4819/configmaps/e2e-watch-test-configmap-a e25d934a-23c8-499f-aeb8-e3d7f6857aa0 7417274 0 2020-05-25 00:16:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-25 00:16:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 25 00:17:05.642: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4819 /api/v1/namespaces/watch-4819/configmaps/e2e-watch-test-configmap-a e25d934a-23c8-499f-aeb8-e3d7f6857aa0 7417304 0 2020-05-25 00:16:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-25 00:16:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 00:17:05.642: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4819 /api/v1/namespaces/watch-4819/configmaps/e2e-watch-test-configmap-a e25d934a-23c8-499f-aeb8-e3d7f6857aa0 7417304 0 2020-05-25 00:16:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-25 00:16:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 25 00:17:15.649: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4819 /api/v1/namespaces/watch-4819/configmaps/e2e-watch-test-configmap-b 6e2cd778-d30d-45ad-a9c1-bbd749d3ecf4 7417334 0 2020-05-25 00:17:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-25 00:17:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 25 00:17:15.650: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4819 /api/v1/namespaces/watch-4819/configmaps/e2e-watch-test-configmap-b 6e2cd778-d30d-45ad-a9c1-bbd749d3ecf4 7417334 0 2020-05-25 00:17:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-25 00:17:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 25 00:17:25.657: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4819 /api/v1/namespaces/watch-4819/configmaps/e2e-watch-test-configmap-b 6e2cd778-d30d-45ad-a9c1-bbd749d3ecf4 7417362 0 2020-05-25 00:17:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-25 00:17:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 25 00:17:25.657: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4819 /api/v1/namespaces/watch-4819/configmaps/e2e-watch-test-configmap-b 6e2cd778-d30d-45ad-a9c1-bbd749d3ecf4 7417362 0 2020-05-25 00:17:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-25 00:17:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:17:35.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4819" for this suite. • [SLOW TEST:60.177 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":116,"skipped":1509,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:17:35.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:17:35.744: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:17:36.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-726" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":117,"skipped":1544,"failed":0} ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:17:36.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3356.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3356.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3356.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3356.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3356.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3356.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3356.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3356.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3356.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3356.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3356.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 97.140.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.140.97_udp@PTR;check="$$(dig +tcp +noall +answer +search 97.140.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.140.97_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3356.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3356.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3356.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3356.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3356.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3356.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3356.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3356.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3356.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3356.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3356.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 97.140.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.140.97_udp@PTR;check="$$(dig +tcp +noall +answer +search 97.140.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.140.97_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 00:17:43.134: INFO: Unable to read wheezy_udp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:43.137: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:43.140: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:43.143: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:43.166: INFO: Unable to read jessie_udp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:43.183: INFO: Unable to read jessie_tcp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:43.186: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:43.189: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:43.207: INFO: Lookups using dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5 failed for: [wheezy_udp@dns-test-service.dns-3356.svc.cluster.local wheezy_tcp@dns-test-service.dns-3356.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local jessie_udp@dns-test-service.dns-3356.svc.cluster.local jessie_tcp@dns-test-service.dns-3356.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local] May 25 00:17:48.212: INFO: Unable to read wheezy_udp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:48.216: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods 
dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:48.219: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:48.221: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:48.275: INFO: Unable to read jessie_udp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:48.277: INFO: Unable to read jessie_tcp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:48.279: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:48.281: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:48.294: INFO: Lookups using dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5 failed for: [wheezy_udp@dns-test-service.dns-3356.svc.cluster.local wheezy_tcp@dns-test-service.dns-3356.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local jessie_udp@dns-test-service.dns-3356.svc.cluster.local jessie_tcp@dns-test-service.dns-3356.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local] May 25 00:17:53.230: INFO: Unable to read wheezy_udp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:53.234: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:53.238: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:53.242: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:53.264: INFO: Unable to read jessie_udp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the 
server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:53.267: INFO: Unable to read jessie_tcp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:53.269: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:53.272: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:53.288: INFO: Lookups using dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5 failed for: [wheezy_udp@dns-test-service.dns-3356.svc.cluster.local wheezy_tcp@dns-test-service.dns-3356.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local jessie_udp@dns-test-service.dns-3356.svc.cluster.local jessie_tcp@dns-test-service.dns-3356.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local] May 25 00:17:58.213: INFO: Unable to read wheezy_udp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:58.216: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:58.219: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:58.222: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:58.241: INFO: Unable to read jessie_udp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:58.244: INFO: Unable to read jessie_tcp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:58.247: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:58.250: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod 
dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:17:58.269: INFO: Lookups using dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5 failed for: [wheezy_udp@dns-test-service.dns-3356.svc.cluster.local wheezy_tcp@dns-test-service.dns-3356.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local jessie_udp@dns-test-service.dns-3356.svc.cluster.local jessie_tcp@dns-test-service.dns-3356.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local] May 25 00:18:03.212: INFO: Unable to read wheezy_udp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:03.221: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:03.224: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:03.226: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:03.242: INFO: Unable to read jessie_udp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:03.244: INFO: Unable to read jessie_tcp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:03.246: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:03.249: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:03.298: INFO: Lookups using dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5 failed for: [wheezy_udp@dns-test-service.dns-3356.svc.cluster.local wheezy_tcp@dns-test-service.dns-3356.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local jessie_udp@dns-test-service.dns-3356.svc.cluster.local jessie_tcp@dns-test-service.dns-3356.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local] May 25 
00:18:08.212: INFO: Unable to read wheezy_udp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:08.214: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:08.216: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:08.219: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:08.238: INFO: Unable to read jessie_udp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:08.241: INFO: Unable to read jessie_tcp@dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:08.243: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:08.246: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local from pod dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5: the server could not find the requested resource (get pods dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5) May 25 00:18:08.265: INFO: Lookups using dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5 failed for: [wheezy_udp@dns-test-service.dns-3356.svc.cluster.local wheezy_tcp@dns-test-service.dns-3356.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local jessie_udp@dns-test-service.dns-3356.svc.cluster.local jessie_tcp@dns-test-service.dns-3356.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3356.svc.cluster.local] May 25 00:18:13.273: INFO: DNS probes using dns-3356/dns-test-9ae5f609-d027-411b-8c13-06848bde2cc5 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:18:14.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3356" for this suite. 
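[Editor's sketch] Each probe above is a dig query whose non-empty answer section the probe pod records as OK under /results/; the suite polls those files, which is why several iterations report failures before the final "DNS probes ... succeeded". One of the wheezy checks run by hand from a throwaway pod (the dnsutils image choice is illustrative, service and namespace names are the ones from this run):

    kubectl run dns-probe --rm -it --restart=Never --image=tutum/dnsutils -- \
      sh -c "dig +notcp +noall +answer +search dns-test-service.dns-3356.svc.cluster.local A && \
             dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3356.svc.cluster.local SRV"
    # Empty answers early on just mean the service records had not propagated yet;
    # the suite retries until every expected A/SRV/PTR record resolves over both UDP and TCP.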
• [SLOW TEST:37.344 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":118,"skipped":1544,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:18:14.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 25 00:18:14.288: INFO: Waiting up to 5m0s for pod "pod-c4364c44-e378-49d2-a065-5b15744144b0" in namespace "emptydir-581" to be "Succeeded or Failed" May 25 00:18:14.291: INFO: Pod "pod-c4364c44-e378-49d2-a065-5b15744144b0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.065244ms May 25 00:18:16.294: INFO: Pod "pod-c4364c44-e378-49d2-a065-5b15744144b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005987622s May 25 00:18:18.298: INFO: Pod "pod-c4364c44-e378-49d2-a065-5b15744144b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010490153s May 25 00:18:20.323: INFO: Pod "pod-c4364c44-e378-49d2-a065-5b15744144b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034855829s STEP: Saw pod success May 25 00:18:20.323: INFO: Pod "pod-c4364c44-e378-49d2-a065-5b15744144b0" satisfied condition "Succeeded or Failed" May 25 00:18:20.325: INFO: Trying to get logs from node latest-worker2 pod pod-c4364c44-e378-49d2-a065-5b15744144b0 container test-container: STEP: delete the pod May 25 00:18:20.373: INFO: Waiting for pod pod-c4364c44-e378-49d2-a065-5b15744144b0 to disappear May 25 00:18:20.387: INFO: Pod pod-c4364c44-e378-49d2-a065-5b15744144b0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:18:20.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-581" for this suite. 
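[Editor's sketch] The emptyDir variants differ only in backing medium, mode bits, and the UID the test container runs as; tmpfs backing is requested with medium: Memory, and emptyDir mounts default to world-writable, which is what lets the non-root UID write. A hand-written analogue of the (non-root,0777,tmpfs) case (UID and names hypothetical):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001        # non-root
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/ok"]
        volumeMounts:
        - name: scratch
          mountPath: /test-volume
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory       # tmpfs-backed
    EOF
    # ls should report a 0777 mount, so the write by UID 1001 succeeds.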
• [SLOW TEST:6.218 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":119,"skipped":1562,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:18:20.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 25 00:18:25.138: INFO: Successfully updated pod "labelsupdate3e00ed1b-7de0-4a7d-b869-f64020fdc268" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:18:29.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6737" for this suite. 
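[Editor's sketch] The labels-update test works because a downward-API projection of metadata.labels is rewritten by the kubelet after the pod's labels change, within the kubelet sync period rather than instantly, which is why the test waits several seconds after the update. A sketch with hypothetical names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: labels-demo
      labels:
        stage: before
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "while true; do cat /podinfo/labels; echo; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: labels
                fieldRef:
                  fieldPath: metadata.labels
    EOF
    kubectl label pod labels-demo stage=after --overwrite
    # kubectl logs -f labels-demo eventually switches from stage="before" to stage="after".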
• [SLOW TEST:8.798 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":120,"skipped":1621,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:18:29.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-2cdb435d-e5b1-4f53-aa33-84e8557ade69 STEP: Creating a pod to test consume configMaps May 25 00:18:29.311: INFO: Waiting up to 5m0s for pod "pod-configmaps-005db858-0464-4921-97ce-8fa04a8eba23" in namespace "configmap-4426" to be "Succeeded or Failed" May 25 00:18:29.361: INFO: Pod "pod-configmaps-005db858-0464-4921-97ce-8fa04a8eba23": Phase="Pending", Reason="", readiness=false. Elapsed: 49.71315ms May 25 00:18:31.412: INFO: Pod "pod-configmaps-005db858-0464-4921-97ce-8fa04a8eba23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100413916s May 25 00:18:33.514: INFO: Pod "pod-configmaps-005db858-0464-4921-97ce-8fa04a8eba23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.202902007s STEP: Saw pod success May 25 00:18:33.514: INFO: Pod "pod-configmaps-005db858-0464-4921-97ce-8fa04a8eba23" satisfied condition "Succeeded or Failed" May 25 00:18:33.517: INFO: Trying to get logs from node latest-worker pod pod-configmaps-005db858-0464-4921-97ce-8fa04a8eba23 container configmap-volume-test: STEP: delete the pod May 25 00:18:33.675: INFO: Waiting for pod pod-configmaps-005db858-0464-4921-97ce-8fa04a8eba23 to disappear May 25 00:18:33.693: INFO: Pod pod-configmaps-005db858-0464-4921-97ce-8fa04a8eba23 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:18:33.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4426" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":121,"skipped":1643,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:18:33.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:18:50.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1874" for this suite. • [SLOW TEST:16.375 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":288,"completed":122,"skipped":1654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:18:50.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 00:18:50.797: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 00:18:52.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725962730, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725962730, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725962731, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725962730, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 00:18:55.870: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:18:56.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1846" for this suite. STEP: Destroying namespace "webhook-1846-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.109 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":123,"skipped":1693,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:18:56.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-0246e2f8-e9e5-436b-9189-155f6fd2afd8 in namespace container-probe-873 May 25 00:19:00.337: INFO: Started pod test-webserver-0246e2f8-e9e5-436b-9189-155f6fd2afd8 in namespace container-probe-873 STEP: checking the pod's current state and verifying that restartCount is present May 25 00:19:00.340: INFO: Initial restart count of pod test-webserver-0246e2f8-e9e5-436b-9189-155f6fd2afd8 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:23:01.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-873" for this suite. 
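The four-minute window between pod start (00:19:00) and teardown (00:23:01) is the test watching restartCount stay at 0 while an HTTP liveness probe keeps passing. A sketch of such a pod, assuming an illustrative image (it must answer GET /healthz with 2xx) and illustrative thresholds; none of these values are read from this run:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "registry.example.com/test-webserver", // placeholder; must serve 200 on /healthz
				LivenessProbe: &corev1.Probe{
					// In the v0.18-era API the probe handler is the embedded
					// Handler field (renamed ProbeHandler in newer releases).
					// As long as GET /healthz keeps succeeding, the kubelet
					// never restarts the container and restartCount stays 0.
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    3,
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}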
• [SLOW TEST:244.878 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":124,"skipped":1699,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:23:01.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:23:01.357: INFO: Create a RollingUpdate DaemonSet May 25 00:23:01.361: INFO: Check that daemon pods launch on every node of the cluster May 25 00:23:01.370: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:23:01.438: INFO: Number of nodes with available pods: 0 May 25 00:23:01.438: INFO: Node latest-worker is running more than one daemon pod May 25 00:23:02.445: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:23:02.449: INFO: Number of nodes with available pods: 0 May 25 00:23:02.449: INFO: Node latest-worker is running more than one daemon pod May 25 00:23:03.529: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:23:03.533: INFO: Number of nodes with available pods: 0 May 25 00:23:03.533: INFO: Node latest-worker is running more than one daemon pod May 25 00:23:04.444: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:23:04.448: INFO: Number of nodes with available pods: 0 May 25 00:23:04.448: INFO: Node latest-worker is running more than one daemon pod May 25 00:23:05.444: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:23:05.448: INFO: Number of nodes with available pods: 1 May 25 00:23:05.448: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:23:06.443: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:23:06.447: INFO: Number 
of nodes with available pods: 2 May 25 00:23:06.447: INFO: Number of running nodes: 2, number of available pods: 2 May 25 00:23:06.447: INFO: Update the DaemonSet to trigger a rollout May 25 00:23:06.454: INFO: Updating DaemonSet daemon-set May 25 00:23:10.473: INFO: Roll back the DaemonSet before rollout is complete May 25 00:23:10.541: INFO: Updating DaemonSet daemon-set May 25 00:23:10.541: INFO: Make sure DaemonSet rollback is complete May 25 00:23:10.553: INFO: Wrong image for pod: daemon-set-xkgqr. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 25 00:23:10.553: INFO: Pod daemon-set-xkgqr is not available May 25 00:23:10.587: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:23:11.592: INFO: Wrong image for pod: daemon-set-xkgqr. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 25 00:23:11.592: INFO: Pod daemon-set-xkgqr is not available May 25 00:23:11.597: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:23:12.619: INFO: Wrong image for pod: daemon-set-xkgqr. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 25 00:23:12.619: INFO: Pod daemon-set-xkgqr is not available May 25 00:23:12.622: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:23:13.592: INFO: Wrong image for pod: daemon-set-xkgqr. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 25 00:23:13.592: INFO: Pod daemon-set-xkgqr is not available May 25 00:23:13.597: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:23:14.592: INFO: Pod daemon-set-wchdz is not available May 25 00:23:14.596: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2860, will wait for the garbage collector to delete the pods May 25 00:23:14.699: INFO: Deleting DaemonSet.extensions daemon-set took: 44.92322ms May 25 00:23:15.100: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.309843ms May 25 00:23:25.304: INFO: Number of nodes with available pods: 0 May 25 00:23:25.304: INFO: Number of running nodes: 0, number of available pods: 0 May 25 00:23:25.307: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2860/daemonsets","resourceVersion":"7418776"},"items":null} May 25 00:23:25.309: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2860/pods","resourceVersion":"7418776"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:23:25.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "daemonsets-2860" for this suite. • [SLOW TEST:24.262 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":125,"skipped":1700,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:23:25.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 25 00:23:25.427: INFO: Waiting up to 5m0s for pod "var-expansion-9a0cdd5a-c2cd-45ef-9d58-601709d272b8" in namespace "var-expansion-7264" to be "Succeeded or Failed" May 25 00:23:25.430: INFO: Pod "var-expansion-9a0cdd5a-c2cd-45ef-9d58-601709d272b8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.124013ms May 25 00:23:27.486: INFO: Pod "var-expansion-9a0cdd5a-c2cd-45ef-9d58-601709d272b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059741051s May 25 00:23:29.505: INFO: Pod "var-expansion-9a0cdd5a-c2cd-45ef-9d58-601709d272b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078619205s STEP: Saw pod success May 25 00:23:29.505: INFO: Pod "var-expansion-9a0cdd5a-c2cd-45ef-9d58-601709d272b8" satisfied condition "Succeeded or Failed" May 25 00:23:29.508: INFO: Trying to get logs from node latest-worker2 pod var-expansion-9a0cdd5a-c2cd-45ef-9d58-601709d272b8 container dapi-container: STEP: delete the pod May 25 00:23:29.556: INFO: Waiting for pod var-expansion-9a0cdd5a-c2cd-45ef-9d58-601709d272b8 to disappear May 25 00:23:29.574: INFO: Pod var-expansion-9a0cdd5a-c2cd-45ef-9d58-601709d272b8 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:23:29.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7264" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":126,"skipped":1710,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:23:29.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 25 00:23:30.210: INFO: created pod pod-service-account-defaultsa May 25 00:23:30.210: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 25 00:23:30.232: INFO: created pod pod-service-account-mountsa May 25 00:23:30.232: INFO: pod pod-service-account-mountsa service account token volume mount: true May 25 00:23:30.302: INFO: created pod pod-service-account-nomountsa May 25 00:23:30.302: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 25 00:23:30.318: INFO: created pod pod-service-account-defaultsa-mountspec May 25 00:23:30.318: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 25 00:23:30.371: INFO: created pod pod-service-account-mountsa-mountspec May 25 00:23:30.371: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 25 00:23:30.441: INFO: created pod pod-service-account-nomountsa-mountspec May 25 00:23:30.441: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 25 00:23:30.454: INFO: created pod pod-service-account-defaultsa-nomountspec May 25 00:23:30.454: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 25 00:23:30.490: INFO: created pod pod-service-account-mountsa-nomountspec May 25 00:23:30.490: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 25 00:23:30.522: INFO: created pod pod-service-account-nomountsa-nomountspec May 25 00:23:30.522: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:23:30.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9933" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":127,"skipped":1727,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:23:30.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 25 00:23:30.923: INFO: >>> kubeConfig: /root/.kube/config May 25 00:23:33.889: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:23:47.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8326" for this suite. • [SLOW TEST:16.768 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":128,"skipped":1729,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:23:47.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5077 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-5077 I0525 00:23:47.683767 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5077, replica count: 2 I0525 00:23:50.734228 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 
running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 00:23:53.734499 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 00:23:53.734: INFO: Creating new exec pod May 25 00:23:58.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5077 execpodl8f89 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 25 00:23:59.015: INFO: stderr: "I0525 00:23:58.891396 2636 log.go:172] (0xc000c6f3f0) (0xc0006ebb80) Create stream\nI0525 00:23:58.891460 2636 log.go:172] (0xc000c6f3f0) (0xc0006ebb80) Stream added, broadcasting: 1\nI0525 00:23:58.895456 2636 log.go:172] (0xc000c6f3f0) Reply frame received for 1\nI0525 00:23:58.895485 2636 log.go:172] (0xc000c6f3f0) (0xc00070a5a0) Create stream\nI0525 00:23:58.895492 2636 log.go:172] (0xc000c6f3f0) (0xc00070a5a0) Stream added, broadcasting: 3\nI0525 00:23:58.896633 2636 log.go:172] (0xc000c6f3f0) Reply frame received for 3\nI0525 00:23:58.896678 2636 log.go:172] (0xc000c6f3f0) (0xc00071ae60) Create stream\nI0525 00:23:58.896692 2636 log.go:172] (0xc000c6f3f0) (0xc00071ae60) Stream added, broadcasting: 5\nI0525 00:23:58.898133 2636 log.go:172] (0xc000c6f3f0) Reply frame received for 5\nI0525 00:23:58.998835 2636 log.go:172] (0xc000c6f3f0) Data frame received for 5\nI0525 00:23:58.998872 2636 log.go:172] (0xc00071ae60) (5) Data frame handling\nI0525 00:23:58.998894 2636 log.go:172] (0xc00071ae60) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0525 00:23:59.008332 2636 log.go:172] (0xc000c6f3f0) Data frame received for 3\nI0525 00:23:59.008385 2636 log.go:172] (0xc00070a5a0) (3) Data frame handling\nI0525 00:23:59.008421 2636 log.go:172] (0xc000c6f3f0) Data frame received for 5\nI0525 00:23:59.008444 2636 log.go:172] (0xc00071ae60) (5) Data frame handling\nI0525 00:23:59.008472 2636 log.go:172] (0xc00071ae60) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0525 00:23:59.008582 2636 log.go:172] (0xc000c6f3f0) Data frame received for 5\nI0525 00:23:59.008603 2636 log.go:172] (0xc00071ae60) (5) Data frame handling\nI0525 00:23:59.010683 2636 log.go:172] (0xc000c6f3f0) Data frame received for 1\nI0525 00:23:59.010717 2636 log.go:172] (0xc0006ebb80) (1) Data frame handling\nI0525 00:23:59.010749 2636 log.go:172] (0xc0006ebb80) (1) Data frame sent\nI0525 00:23:59.010765 2636 log.go:172] (0xc000c6f3f0) (0xc0006ebb80) Stream removed, broadcasting: 1\nI0525 00:23:59.010824 2636 log.go:172] (0xc000c6f3f0) Go away received\nI0525 00:23:59.011107 2636 log.go:172] (0xc000c6f3f0) (0xc0006ebb80) Stream removed, broadcasting: 1\nI0525 00:23:59.011127 2636 log.go:172] (0xc000c6f3f0) (0xc00070a5a0) Stream removed, broadcasting: 3\nI0525 00:23:59.011138 2636 log.go:172] (0xc000c6f3f0) (0xc00071ae60) Stream removed, broadcasting: 5\n" May 25 00:23:59.015: INFO: stdout: "" May 25 00:23:59.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5077 execpodl8f89 -- /bin/sh -x -c nc -zv -t -w 2 10.99.108.21 80' May 25 00:23:59.232: INFO: stderr: "I0525 00:23:59.149436 2656 log.go:172] (0xc0000ed8c0) (0xc0004b3180) Create stream\nI0525 00:23:59.149485 2656 log.go:172] (0xc0000ed8c0) (0xc0004b3180) Stream added, broadcasting: 1\nI0525 00:23:59.152123 2656 log.go:172] (0xc0000ed8c0) Reply frame received for 
1\nI0525 00:23:59.152174 2656 log.go:172] (0xc0000ed8c0) (0xc0003bcd20) Create stream\nI0525 00:23:59.152189 2656 log.go:172] (0xc0000ed8c0) (0xc0003bcd20) Stream added, broadcasting: 3\nI0525 00:23:59.153722 2656 log.go:172] (0xc0000ed8c0) Reply frame received for 3\nI0525 00:23:59.153759 2656 log.go:172] (0xc0000ed8c0) (0xc000375ae0) Create stream\nI0525 00:23:59.153773 2656 log.go:172] (0xc0000ed8c0) (0xc000375ae0) Stream added, broadcasting: 5\nI0525 00:23:59.154848 2656 log.go:172] (0xc0000ed8c0) Reply frame received for 5\nI0525 00:23:59.226740 2656 log.go:172] (0xc0000ed8c0) Data frame received for 3\nI0525 00:23:59.226815 2656 log.go:172] (0xc0003bcd20) (3) Data frame handling\nI0525 00:23:59.226855 2656 log.go:172] (0xc0000ed8c0) Data frame received for 5\nI0525 00:23:59.226882 2656 log.go:172] (0xc000375ae0) (5) Data frame handling\nI0525 00:23:59.226903 2656 log.go:172] (0xc000375ae0) (5) Data frame sent\nI0525 00:23:59.226916 2656 log.go:172] (0xc0000ed8c0) Data frame received for 5\nI0525 00:23:59.226926 2656 log.go:172] (0xc000375ae0) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.108.21 80\nConnection to 10.99.108.21 80 port [tcp/http] succeeded!\nI0525 00:23:59.228028 2656 log.go:172] (0xc0000ed8c0) Data frame received for 1\nI0525 00:23:59.228046 2656 log.go:172] (0xc0004b3180) (1) Data frame handling\nI0525 00:23:59.228057 2656 log.go:172] (0xc0004b3180) (1) Data frame sent\nI0525 00:23:59.228072 2656 log.go:172] (0xc0000ed8c0) (0xc0004b3180) Stream removed, broadcasting: 1\nI0525 00:23:59.228094 2656 log.go:172] (0xc0000ed8c0) Go away received\nI0525 00:23:59.228307 2656 log.go:172] (0xc0000ed8c0) (0xc0004b3180) Stream removed, broadcasting: 1\nI0525 00:23:59.228320 2656 log.go:172] (0xc0000ed8c0) (0xc0003bcd20) Stream removed, broadcasting: 3\nI0525 00:23:59.228326 2656 log.go:172] (0xc0000ed8c0) (0xc000375ae0) Stream removed, broadcasting: 5\n" May 25 00:23:59.232: INFO: stdout: "" May 25 00:23:59.232: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:23:59.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5077" for this suite. 
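The type change itself is an ordinary Service update: clear spec.externalName, set the new type, and expose a port so endpoints get populated; the two nc probes above then verify reachability by service name and by the allocated ClusterIP (10.99.108.21). A client-go sketch where the service and namespace names mirror this log but the port wiring is illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	svcs := client.CoreV1().Services("services-5077")
	svc, err := svcs.Get(context.TODO(), "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Flip the service from ExternalName (a pure DNS CNAME) to ClusterIP:
	// drop the external name and add a port backed by the selector so
	// kube-proxy programs a reachable virtual IP.
	svc.Spec.Type = corev1.ServiceTypeClusterIP
	svc.Spec.ExternalName = ""
	svc.Spec.Ports = []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}}
	if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}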
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.793 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":129,"skipped":1752,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:23:59.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:23:59.362: INFO: Waiting up to 5m0s for pod "busybox-user-65534-66c81a7c-23b0-481a-9ae3-bfaf8598a379" in namespace "security-context-test-8181" to be "Succeeded or Failed" May 25 00:23:59.418: INFO: Pod "busybox-user-65534-66c81a7c-23b0-481a-9ae3-bfaf8598a379": Phase="Pending", Reason="", readiness=false. Elapsed: 55.931933ms May 25 00:24:01.422: INFO: Pod "busybox-user-65534-66c81a7c-23b0-481a-9ae3-bfaf8598a379": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060418323s May 25 00:24:03.427: INFO: Pod "busybox-user-65534-66c81a7c-23b0-481a-9ae3-bfaf8598a379": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06465846s May 25 00:24:03.427: INFO: Pod "busybox-user-65534-66c81a7c-23b0-481a-9ae3-bfaf8598a379" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:24:03.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8181" for this suite. 
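The uid assertion above is driven entirely by the container securityContext: the container runs as uid 65534 and the test checks the pod reaches Succeeded. A sketch of the relevant spec, with an illustrative image and command (only the uid and the pod-name prefix come from this log):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(65534) // the conventional "nobody" uid the test asserts

	_ = &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-user-65534",
				Image:   "busybox", // placeholder
				Command: []string{"sh", "-c", "id -u"},
				// The kubelet starts the process as this uid; no matching
				// passwd entry is required inside the image for `id -u`.
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
}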
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":130,"skipped":1768,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:24:03.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:24:03.573: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:24:07.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8382" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":131,"skipped":1821,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:24:07.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0525 00:24:08.975747 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 25 00:24:08.975: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:24:08.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2808" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":132,"skipped":1850,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:24:08.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:24:09.118: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-8111 I0525 00:24:09.141564 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8111, replica count: 1 I0525 00:24:10.191974 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 00:24:11.192174 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 00:24:12.192405 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 00:24:13.192630 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 00:24:13.368: INFO: Created: latency-svc-ts222 May 25 00:24:13.375: INFO: Got endpoints: latency-svc-ts222 [82.409926ms] May 25 00:24:13.434: INFO: Created: latency-svc-c9xjd May 25 00:24:13.459: INFO: Got endpoints: latency-svc-c9xjd [83.612215ms] May 25 00:24:13.517: INFO: Created: latency-svc-n979c May 25 00:24:13.529: INFO: Got endpoints: latency-svc-n979c [153.750014ms] May 25 00:24:13.563: INFO: 
Created: latency-svc-8c4gd May 25 00:24:13.574: INFO: Got endpoints: latency-svc-8c4gd [198.655241ms] May 25 00:24:13.606: INFO: Created: latency-svc-rxzx5 May 25 00:24:13.616: INFO: Got endpoints: latency-svc-rxzx5 [240.508423ms] May 25 00:24:13.699: INFO: Created: latency-svc-bmmtk May 25 00:24:13.714: INFO: Got endpoints: latency-svc-bmmtk [339.172039ms] May 25 00:24:13.734: INFO: Created: latency-svc-n269v May 25 00:24:13.755: INFO: Got endpoints: latency-svc-n269v [379.765964ms] May 25 00:24:13.811: INFO: Created: latency-svc-l9s65 May 25 00:24:13.829: INFO: Got endpoints: latency-svc-l9s65 [454.31883ms] May 25 00:24:13.869: INFO: Created: latency-svc-wqplt May 25 00:24:13.888: INFO: Got endpoints: latency-svc-wqplt [512.711757ms] May 25 00:24:13.966: INFO: Created: latency-svc-5lqwn May 25 00:24:13.979: INFO: Got endpoints: latency-svc-5lqwn [604.056292ms] May 25 00:24:14.028: INFO: Created: latency-svc-5dmdl May 25 00:24:14.128: INFO: Got endpoints: latency-svc-5dmdl [752.907581ms] May 25 00:24:14.160: INFO: Created: latency-svc-bcgvl May 25 00:24:14.163: INFO: Got endpoints: latency-svc-bcgvl [788.247879ms] May 25 00:24:14.220: INFO: Created: latency-svc-q6dkn May 25 00:24:14.278: INFO: Got endpoints: latency-svc-q6dkn [902.93701ms] May 25 00:24:14.343: INFO: Created: latency-svc-96nwm May 25 00:24:14.410: INFO: Got endpoints: latency-svc-96nwm [1.034419379s] May 25 00:24:14.490: INFO: Created: latency-svc-xf52c May 25 00:24:14.542: INFO: Got endpoints: latency-svc-xf52c [1.166422437s] May 25 00:24:14.601: INFO: Created: latency-svc-t2gcc May 25 00:24:14.621: INFO: Got endpoints: latency-svc-t2gcc [1.246188478s] May 25 00:24:14.711: INFO: Created: latency-svc-ssdx6 May 25 00:24:14.723: INFO: Got endpoints: latency-svc-ssdx6 [1.264404322s] May 25 00:24:14.753: INFO: Created: latency-svc-kszx5 May 25 00:24:14.787: INFO: Got endpoints: latency-svc-kszx5 [1.257809249s] May 25 00:24:14.853: INFO: Created: latency-svc-q58tk May 25 00:24:14.883: INFO: Got endpoints: latency-svc-q58tk [1.309170373s] May 25 00:24:14.921: INFO: Created: latency-svc-pvz8k May 25 00:24:14.978: INFO: Got endpoints: latency-svc-pvz8k [1.362345366s] May 25 00:24:15.018: INFO: Created: latency-svc-7dc44 May 25 00:24:15.033: INFO: Got endpoints: latency-svc-7dc44 [1.318817681s] May 25 00:24:15.063: INFO: Created: latency-svc-cgvmw May 25 00:24:15.116: INFO: Got endpoints: latency-svc-cgvmw [1.360376801s] May 25 00:24:15.159: INFO: Created: latency-svc-znsbf May 25 00:24:15.171: INFO: Got endpoints: latency-svc-znsbf [1.341143495s] May 25 00:24:15.269: INFO: Created: latency-svc-xqsjr May 25 00:24:15.298: INFO: Got endpoints: latency-svc-xqsjr [1.409268384s] May 25 00:24:15.339: INFO: Created: latency-svc-jsmjk May 25 00:24:15.358: INFO: Got endpoints: latency-svc-jsmjk [1.379046169s] May 25 00:24:15.410: INFO: Created: latency-svc-f2cdz May 25 00:24:15.431: INFO: Created: latency-svc-q9htj May 25 00:24:15.431: INFO: Got endpoints: latency-svc-f2cdz [1.30320072s] May 25 00:24:15.451: INFO: Got endpoints: latency-svc-q9htj [1.287912679s] May 25 00:24:15.479: INFO: Created: latency-svc-5pbjk May 25 00:24:15.498: INFO: Got endpoints: latency-svc-5pbjk [1.219711549s] May 25 00:24:15.571: INFO: Created: latency-svc-4rdws May 25 00:24:15.597: INFO: Created: latency-svc-4p95k May 25 00:24:15.597: INFO: Got endpoints: latency-svc-4rdws [1.187208096s] May 25 00:24:15.621: INFO: Got endpoints: latency-svc-4p95k [1.079594505s] May 25 00:24:15.740: INFO: Created: latency-svc-2zxth May 25 00:24:15.750: INFO: Got endpoints: 
latency-svc-2zxth [1.128922239s] May 25 00:24:15.774: INFO: Created: latency-svc-8k69x May 25 00:24:15.794: INFO: Got endpoints: latency-svc-8k69x [1.071184298s] May 25 00:24:15.931: INFO: Created: latency-svc-bdtdt May 25 00:24:15.943: INFO: Got endpoints: latency-svc-bdtdt [1.156583553s] May 25 00:24:15.965: INFO: Created: latency-svc-j5v2n May 25 00:24:15.980: INFO: Got endpoints: latency-svc-j5v2n [1.096355198s] May 25 00:24:16.013: INFO: Created: latency-svc-7mgj6 May 25 00:24:16.086: INFO: Got endpoints: latency-svc-7mgj6 [1.107518866s] May 25 00:24:16.136: INFO: Created: latency-svc-xxg6x May 25 00:24:16.175: INFO: Got endpoints: latency-svc-xxg6x [1.141576876s] May 25 00:24:16.260: INFO: Created: latency-svc-8kz9f May 25 00:24:16.293: INFO: Got endpoints: latency-svc-8kz9f [1.176991413s] May 25 00:24:16.398: INFO: Created: latency-svc-qpvzs May 25 00:24:16.402: INFO: Got endpoints: latency-svc-qpvzs [1.231283767s] May 25 00:24:16.487: INFO: Created: latency-svc-tt99j May 25 00:24:16.577: INFO: Got endpoints: latency-svc-tt99j [1.279384125s] May 25 00:24:16.604: INFO: Created: latency-svc-sjm74 May 25 00:24:16.635: INFO: Got endpoints: latency-svc-sjm74 [1.276016103s] May 25 00:24:16.733: INFO: Created: latency-svc-8jnbc May 25 00:24:16.736: INFO: Got endpoints: latency-svc-8jnbc [1.305008466s] May 25 00:24:16.776: INFO: Created: latency-svc-bqkfm May 25 00:24:16.793: INFO: Got endpoints: latency-svc-bqkfm [1.341909892s] May 25 00:24:16.817: INFO: Created: latency-svc-mgdw4 May 25 00:24:16.894: INFO: Got endpoints: latency-svc-mgdw4 [1.396342431s] May 25 00:24:16.946: INFO: Created: latency-svc-qd4st May 25 00:24:16.977: INFO: Got endpoints: latency-svc-qd4st [1.380081361s] May 25 00:24:17.051: INFO: Created: latency-svc-dmt4p May 25 00:24:17.084: INFO: Got endpoints: latency-svc-dmt4p [1.46255654s] May 25 00:24:17.126: INFO: Created: latency-svc-z5kv6 May 25 00:24:17.136: INFO: Got endpoints: latency-svc-z5kv6 [1.385667855s] May 25 00:24:17.219: INFO: Created: latency-svc-7jmfj May 25 00:24:17.261: INFO: Got endpoints: latency-svc-7jmfj [1.466460526s] May 25 00:24:17.296: INFO: Created: latency-svc-42pw5 May 25 00:24:17.374: INFO: Got endpoints: latency-svc-42pw5 [1.430220713s] May 25 00:24:17.375: INFO: Created: latency-svc-lfkfl May 25 00:24:17.381: INFO: Got endpoints: latency-svc-lfkfl [1.401511128s] May 25 00:24:17.434: INFO: Created: latency-svc-qm2fc May 25 00:24:17.454: INFO: Got endpoints: latency-svc-qm2fc [1.367619566s] May 25 00:24:17.523: INFO: Created: latency-svc-s5pzt May 25 00:24:17.555: INFO: Got endpoints: latency-svc-s5pzt [1.380064745s] May 25 00:24:17.588: INFO: Created: latency-svc-kr7f5 May 25 00:24:17.604: INFO: Got endpoints: latency-svc-kr7f5 [1.311569304s] May 25 00:24:17.667: INFO: Created: latency-svc-2c9rq May 25 00:24:17.690: INFO: Got endpoints: latency-svc-2c9rq [1.287856262s] May 25 00:24:17.722: INFO: Created: latency-svc-zlxbq May 25 00:24:17.737: INFO: Got endpoints: latency-svc-zlxbq [1.159632389s] May 25 00:24:17.758: INFO: Created: latency-svc-kk985 May 25 00:24:17.804: INFO: Got endpoints: latency-svc-kk985 [1.169845955s] May 25 00:24:17.818: INFO: Created: latency-svc-f8qvm May 25 00:24:17.834: INFO: Got endpoints: latency-svc-f8qvm [1.09733433s] May 25 00:24:17.858: INFO: Created: latency-svc-dv774 May 25 00:24:17.984: INFO: Got endpoints: latency-svc-dv774 [1.190958914s] May 25 00:24:17.998: INFO: Created: latency-svc-spdfz May 25 00:24:18.022: INFO: Got endpoints: latency-svc-spdfz [1.127867927s] May 25 00:24:18.084: INFO: Created: 
latency-svc-fr7tf May 25 00:24:18.141: INFO: Got endpoints: latency-svc-fr7tf [1.163999116s] May 25 00:24:18.207: INFO: Created: latency-svc-sfdxg May 25 00:24:18.213: INFO: Got endpoints: latency-svc-sfdxg [1.129376141s] May 25 00:24:18.302: INFO: Created: latency-svc-g7zbr May 25 00:24:18.326: INFO: Got endpoints: latency-svc-g7zbr [1.189891731s] May 25 00:24:18.380: INFO: Created: latency-svc-5h65t May 25 00:24:18.433: INFO: Got endpoints: latency-svc-5h65t [1.172170324s] May 25 00:24:18.459: INFO: Created: latency-svc-sxw5w May 25 00:24:18.492: INFO: Got endpoints: latency-svc-sxw5w [1.11806499s] May 25 00:24:18.571: INFO: Created: latency-svc-gg8wm May 25 00:24:18.602: INFO: Got endpoints: latency-svc-gg8wm [1.220564613s] May 25 00:24:18.602: INFO: Created: latency-svc-rmfjm May 25 00:24:18.644: INFO: Got endpoints: latency-svc-rmfjm [1.190272919s] May 25 00:24:18.711: INFO: Created: latency-svc-qm85g May 25 00:24:18.748: INFO: Got endpoints: latency-svc-qm85g [1.192981928s] May 25 00:24:18.790: INFO: Created: latency-svc-jm9xh May 25 00:24:18.871: INFO: Got endpoints: latency-svc-jm9xh [1.266845345s] May 25 00:24:18.902: INFO: Created: latency-svc-288fk May 25 00:24:18.935: INFO: Got endpoints: latency-svc-288fk [1.245061951s] May 25 00:24:19.014: INFO: Created: latency-svc-mwkfd May 25 00:24:19.044: INFO: Got endpoints: latency-svc-mwkfd [1.307069389s] May 25 00:24:19.078: INFO: Created: latency-svc-5jgls May 25 00:24:19.087: INFO: Got endpoints: latency-svc-5jgls [1.28220314s] May 25 00:24:19.165: INFO: Created: latency-svc-zvvf2 May 25 00:24:19.189: INFO: Got endpoints: latency-svc-zvvf2 [1.355747138s] May 25 00:24:19.226: INFO: Created: latency-svc-t4wjw May 25 00:24:19.244: INFO: Got endpoints: latency-svc-t4wjw [1.260030132s] May 25 00:24:19.302: INFO: Created: latency-svc-p7q2s May 25 00:24:19.305: INFO: Got endpoints: latency-svc-p7q2s [1.282821738s] May 25 00:24:19.387: INFO: Created: latency-svc-4sdcw May 25 00:24:19.433: INFO: Got endpoints: latency-svc-4sdcw [1.292059325s] May 25 00:24:19.460: INFO: Created: latency-svc-84xtd May 25 00:24:19.492: INFO: Got endpoints: latency-svc-84xtd [1.279013598s] May 25 00:24:19.583: INFO: Created: latency-svc-6tntp May 25 00:24:19.587: INFO: Got endpoints: latency-svc-6tntp [1.260764345s] May 25 00:24:19.639: INFO: Created: latency-svc-qmd6t May 25 00:24:19.658: INFO: Got endpoints: latency-svc-qmd6t [1.22482053s] May 25 00:24:19.740: INFO: Created: latency-svc-4kj2j May 25 00:24:19.755: INFO: Got endpoints: latency-svc-4kj2j [1.263106619s] May 25 00:24:19.798: INFO: Created: latency-svc-8tz65 May 25 00:24:19.813: INFO: Got endpoints: latency-svc-8tz65 [1.211722363s] May 25 00:24:19.921: INFO: Created: latency-svc-tqcrr May 25 00:24:19.964: INFO: Got endpoints: latency-svc-tqcrr [1.319701431s] May 25 00:24:20.026: INFO: Created: latency-svc-c9zt7 May 25 00:24:20.031: INFO: Got endpoints: latency-svc-c9zt7 [1.282678978s] May 25 00:24:20.115: INFO: Created: latency-svc-q2qfj May 25 00:24:20.224: INFO: Got endpoints: latency-svc-q2qfj [1.352244319s] May 25 00:24:20.284: INFO: Created: latency-svc-6rx7l May 25 00:24:20.307: INFO: Got endpoints: latency-svc-6rx7l [1.371992848s] May 25 00:24:20.406: INFO: Created: latency-svc-snx8x May 25 00:24:20.421: INFO: Got endpoints: latency-svc-snx8x [1.377177897s] May 25 00:24:20.457: INFO: Created: latency-svc-xnxb6 May 25 00:24:20.479: INFO: Got endpoints: latency-svc-xnxb6 [1.392346391s] May 25 00:24:20.547: INFO: Created: latency-svc-n4mtf May 25 00:24:20.551: INFO: Got endpoints: 
latency-svc-n4mtf [1.361404198s] May 25 00:24:20.617: INFO: Created: latency-svc-hqbp8 May 25 00:24:20.632: INFO: Got endpoints: latency-svc-hqbp8 [1.38798161s] May 25 00:24:20.738: INFO: Created: latency-svc-vdhbc May 25 00:24:20.763: INFO: Got endpoints: latency-svc-vdhbc [1.458111624s] May 25 00:24:20.853: INFO: Created: latency-svc-lcbht May 25 00:24:20.867: INFO: Got endpoints: latency-svc-lcbht [1.433910585s] May 25 00:24:20.935: INFO: Created: latency-svc-2dwcj May 25 00:24:20.946: INFO: Got endpoints: latency-svc-2dwcj [1.453185007s] May 25 00:24:21.008: INFO: Created: latency-svc-5zxp7 May 25 00:24:21.013: INFO: Got endpoints: latency-svc-5zxp7 [1.426096528s] May 25 00:24:21.045: INFO: Created: latency-svc-9dctw May 25 00:24:21.061: INFO: Got endpoints: latency-svc-9dctw [1.40316439s] May 25 00:24:21.093: INFO: Created: latency-svc-cfpcb May 25 00:24:21.182: INFO: Got endpoints: latency-svc-cfpcb [1.427091938s] May 25 00:24:21.212: INFO: Created: latency-svc-tnrvv May 25 00:24:21.223: INFO: Got endpoints: latency-svc-tnrvv [1.409753219s] May 25 00:24:21.253: INFO: Created: latency-svc-l6452 May 25 00:24:21.343: INFO: Got endpoints: latency-svc-l6452 [1.379733964s] May 25 00:24:21.369: INFO: Created: latency-svc-qqrf4 May 25 00:24:21.398: INFO: Got endpoints: latency-svc-qqrf4 [1.367797153s] May 25 00:24:21.422: INFO: Created: latency-svc-vqwss May 25 00:24:21.511: INFO: Got endpoints: latency-svc-vqwss [1.287456367s] May 25 00:24:21.514: INFO: Created: latency-svc-snmvq May 25 00:24:21.525: INFO: Got endpoints: latency-svc-snmvq [1.218254595s] May 25 00:24:21.579: INFO: Created: latency-svc-8wbd7 May 25 00:24:21.673: INFO: Got endpoints: latency-svc-8wbd7 [1.251749523s] May 25 00:24:21.714: INFO: Created: latency-svc-jz8pf May 25 00:24:21.730: INFO: Got endpoints: latency-svc-jz8pf [1.250708977s] May 25 00:24:21.756: INFO: Created: latency-svc-l74zd May 25 00:24:21.816: INFO: Got endpoints: latency-svc-l74zd [1.265486522s] May 25 00:24:21.834: INFO: Created: latency-svc-ldj4r May 25 00:24:21.867: INFO: Got endpoints: latency-svc-ldj4r [1.23472124s] May 25 00:24:21.897: INFO: Created: latency-svc-qrhs4 May 25 00:24:21.914: INFO: Got endpoints: latency-svc-qrhs4 [1.150434169s] May 25 00:24:21.966: INFO: Created: latency-svc-9gg7t May 25 00:24:21.973: INFO: Got endpoints: latency-svc-9gg7t [1.105863207s] May 25 00:24:22.047: INFO: Created: latency-svc-b4ptr May 25 00:24:22.064: INFO: Got endpoints: latency-svc-b4ptr [1.118336586s] May 25 00:24:22.123: INFO: Created: latency-svc-249fb May 25 00:24:22.130: INFO: Got endpoints: latency-svc-249fb [1.117159256s] May 25 00:24:22.171: INFO: Created: latency-svc-x2jn7 May 25 00:24:22.197: INFO: Got endpoints: latency-svc-x2jn7 [1.135615008s] May 25 00:24:22.302: INFO: Created: latency-svc-nkgm8 May 25 00:24:22.305: INFO: Got endpoints: latency-svc-nkgm8 [1.122800474s] May 25 00:24:22.347: INFO: Created: latency-svc-d8hr2 May 25 00:24:22.359: INFO: Got endpoints: latency-svc-d8hr2 [1.136075669s] May 25 00:24:22.469: INFO: Created: latency-svc-qlppf May 25 00:24:22.500: INFO: Got endpoints: latency-svc-qlppf [1.156420132s] May 25 00:24:22.532: INFO: Created: latency-svc-wq8r8 May 25 00:24:22.553: INFO: Got endpoints: latency-svc-wq8r8 [1.154173305s] May 25 00:24:22.619: INFO: Created: latency-svc-k4ztx May 25 00:24:22.629: INFO: Got endpoints: latency-svc-k4ztx [1.117471188s] May 25 00:24:22.674: INFO: Created: latency-svc-xjxc6 May 25 00:24:22.692: INFO: Got endpoints: latency-svc-xjxc6 [1.166045056s] May 25 00:24:22.769: INFO: Created: 
latency-svc-8rrfb May 25 00:24:22.796: INFO: Got endpoints: latency-svc-8rrfb [1.122979933s] May 25 00:24:22.799: INFO: Created: latency-svc-gwnxk May 25 00:24:22.839: INFO: Got endpoints: latency-svc-gwnxk [1.108994871s] May 25 00:24:22.862: INFO: Created: latency-svc-qcnx8 May 25 00:24:22.924: INFO: Got endpoints: latency-svc-qcnx8 [1.10802562s] May 25 00:24:22.926: INFO: Created: latency-svc-lwd6c May 25 00:24:22.950: INFO: Got endpoints: latency-svc-lwd6c [1.083110891s] May 25 00:24:22.988: INFO: Created: latency-svc-lzbz9 May 25 00:24:23.005: INFO: Got endpoints: latency-svc-lzbz9 [1.091502454s] May 25 00:24:23.081: INFO: Created: latency-svc-zcbmg May 25 00:24:23.089: INFO: Got endpoints: latency-svc-zcbmg [1.115761729s] May 25 00:24:23.114: INFO: Created: latency-svc-66b99 May 25 00:24:23.126: INFO: Got endpoints: latency-svc-66b99 [1.061964492s] May 25 00:24:23.154: INFO: Created: latency-svc-zjx4n May 25 00:24:23.168: INFO: Got endpoints: latency-svc-zjx4n [1.038056002s] May 25 00:24:23.247: INFO: Created: latency-svc-tl8g2 May 25 00:24:23.253: INFO: Got endpoints: latency-svc-tl8g2 [1.05584137s] May 25 00:24:23.288: INFO: Created: latency-svc-whbgd May 25 00:24:23.307: INFO: Got endpoints: latency-svc-whbgd [1.002352549s] May 25 00:24:23.336: INFO: Created: latency-svc-jtlrz May 25 00:24:23.391: INFO: Got endpoints: latency-svc-jtlrz [1.03188278s] May 25 00:24:23.396: INFO: Created: latency-svc-xmg2x May 25 00:24:23.410: INFO: Got endpoints: latency-svc-xmg2x [910.152375ms] May 25 00:24:23.435: INFO: Created: latency-svc-b54cs May 25 00:24:23.453: INFO: Got endpoints: latency-svc-b54cs [900.33493ms] May 25 00:24:23.472: INFO: Created: latency-svc-7v6hx May 25 00:24:23.535: INFO: Got endpoints: latency-svc-7v6hx [906.894971ms] May 25 00:24:23.546: INFO: Created: latency-svc-r9926 May 25 00:24:23.566: INFO: Got endpoints: latency-svc-r9926 [874.20861ms] May 25 00:24:23.606: INFO: Created: latency-svc-wvb22 May 25 00:24:23.616: INFO: Got endpoints: latency-svc-wvb22 [819.592218ms] May 25 00:24:23.681: INFO: Created: latency-svc-rtp7g May 25 00:24:23.683: INFO: Got endpoints: latency-svc-rtp7g [843.793534ms] May 25 00:24:23.711: INFO: Created: latency-svc-ksp54 May 25 00:24:23.717: INFO: Got endpoints: latency-svc-ksp54 [792.832084ms] May 25 00:24:23.742: INFO: Created: latency-svc-f6dl6 May 25 00:24:23.747: INFO: Got endpoints: latency-svc-f6dl6 [797.056561ms] May 25 00:24:23.774: INFO: Created: latency-svc-vdrpm May 25 00:24:23.852: INFO: Got endpoints: latency-svc-vdrpm [846.863062ms] May 25 00:24:23.855: INFO: Created: latency-svc-zt6ls May 25 00:24:23.879: INFO: Got endpoints: latency-svc-zt6ls [790.142604ms] May 25 00:24:23.928: INFO: Created: latency-svc-6fh7t May 25 00:24:24.020: INFO: Got endpoints: latency-svc-6fh7t [893.9322ms] May 25 00:24:24.062: INFO: Created: latency-svc-ndf5m May 25 00:24:24.095: INFO: Got endpoints: latency-svc-ndf5m [926.996768ms] May 25 00:24:24.158: INFO: Created: latency-svc-vk5rt May 25 00:24:24.164: INFO: Got endpoints: latency-svc-vk5rt [910.752424ms] May 25 00:24:24.194: INFO: Created: latency-svc-cwhpg May 25 00:24:24.212: INFO: Got endpoints: latency-svc-cwhpg [904.837513ms] May 25 00:24:24.338: INFO: Created: latency-svc-tkt8n May 25 00:24:24.344: INFO: Got endpoints: latency-svc-tkt8n [953.14412ms] May 25 00:24:24.402: INFO: Created: latency-svc-mrhzb May 25 00:24:24.411: INFO: Got endpoints: latency-svc-mrhzb [1.000965347s] May 25 00:24:24.434: INFO: Created: latency-svc-6qb4c May 25 00:24:24.493: INFO: Got endpoints: latency-svc-6qb4c 
[1.039894563s] May 25 00:24:24.496: INFO: Created: latency-svc-sdkdq May 25 00:24:24.502: INFO: Got endpoints: latency-svc-sdkdq [966.204744ms] May 25 00:24:24.527: INFO: Created: latency-svc-f7nsb May 25 00:24:24.545: INFO: Got endpoints: latency-svc-f7nsb [979.591215ms] May 25 00:24:24.582: INFO: Created: latency-svc-g96k4 May 25 00:24:24.649: INFO: Got endpoints: latency-svc-g96k4 [1.033348456s] May 25 00:24:24.652: INFO: Created: latency-svc-qml6c May 25 00:24:24.666: INFO: Got endpoints: latency-svc-qml6c [982.92034ms] May 25 00:24:24.692: INFO: Created: latency-svc-qp85f May 25 00:24:24.708: INFO: Got endpoints: latency-svc-qp85f [990.818448ms] May 25 00:24:24.805: INFO: Created: latency-svc-spfr7 May 25 00:24:24.810: INFO: Got endpoints: latency-svc-spfr7 [1.062420878s] May 25 00:24:24.858: INFO: Created: latency-svc-jjgbt May 25 00:24:24.871: INFO: Got endpoints: latency-svc-jjgbt [1.019019983s] May 25 00:24:24.899: INFO: Created: latency-svc-r8tr6 May 25 00:24:24.960: INFO: Got endpoints: latency-svc-r8tr6 [1.080733457s] May 25 00:24:24.998: INFO: Created: latency-svc-8hqhm May 25 00:24:25.010: INFO: Got endpoints: latency-svc-8hqhm [990.134607ms] May 25 00:24:25.105: INFO: Created: latency-svc-g897c May 25 00:24:25.112: INFO: Got endpoints: latency-svc-g897c [1.016805583s] May 25 00:24:25.139: INFO: Created: latency-svc-h5ts5 May 25 00:24:25.155: INFO: Got endpoints: latency-svc-h5ts5 [991.504294ms] May 25 00:24:25.177: INFO: Created: latency-svc-wlvkl May 25 00:24:25.191: INFO: Got endpoints: latency-svc-wlvkl [979.126122ms] May 25 00:24:25.290: INFO: Created: latency-svc-6mb8w May 25 00:24:25.294: INFO: Got endpoints: latency-svc-6mb8w [949.245263ms] May 25 00:24:25.343: INFO: Created: latency-svc-f4xkf May 25 00:24:25.361: INFO: Got endpoints: latency-svc-f4xkf [949.456266ms] May 25 00:24:25.385: INFO: Created: latency-svc-w8cw5 May 25 00:24:25.451: INFO: Got endpoints: latency-svc-w8cw5 [958.114191ms] May 25 00:24:25.471: INFO: Created: latency-svc-mlt72 May 25 00:24:25.481: INFO: Got endpoints: latency-svc-mlt72 [979.643568ms] May 25 00:24:25.520: INFO: Created: latency-svc-chj7f May 25 00:24:25.530: INFO: Got endpoints: latency-svc-chj7f [984.283681ms] May 25 00:24:25.625: INFO: Created: latency-svc-fcxvx May 25 00:24:25.632: INFO: Got endpoints: latency-svc-fcxvx [982.742017ms] May 25 00:24:25.667: INFO: Created: latency-svc-cbr7d May 25 00:24:25.686: INFO: Got endpoints: latency-svc-cbr7d [1.020731157s] May 25 00:24:25.706: INFO: Created: latency-svc-75cb2 May 25 00:24:25.723: INFO: Got endpoints: latency-svc-75cb2 [1.014265282s] May 25 00:24:25.793: INFO: Created: latency-svc-pv2wp May 25 00:24:25.797: INFO: Got endpoints: latency-svc-pv2wp [987.077293ms] May 25 00:24:25.831: INFO: Created: latency-svc-69zm9 May 25 00:24:25.865: INFO: Got endpoints: latency-svc-69zm9 [993.473158ms] May 25 00:24:25.942: INFO: Created: latency-svc-8w2kv May 25 00:24:25.999: INFO: Got endpoints: latency-svc-8w2kv [1.038845435s] May 25 00:24:26.000: INFO: Created: latency-svc-6vt9s May 25 00:24:26.030: INFO: Got endpoints: latency-svc-6vt9s [1.019356899s] May 25 00:24:26.092: INFO: Created: latency-svc-7n9db May 25 00:24:26.109: INFO: Got endpoints: latency-svc-7n9db [996.927992ms] May 25 00:24:26.141: INFO: Created: latency-svc-vbdvk May 25 00:24:26.248: INFO: Got endpoints: latency-svc-vbdvk [1.093090789s] May 25 00:24:26.299: INFO: Created: latency-svc-jt7zk May 25 00:24:26.335: INFO: Got endpoints: latency-svc-jt7zk [1.143567874s] May 25 00:24:26.423: INFO: Created: latency-svc-z4lvb May 
25 00:24:26.453: INFO: Got endpoints: latency-svc-z4lvb [1.159155921s] May 25 00:24:26.479: INFO: Created: latency-svc-58ns5 May 25 00:24:26.547: INFO: Got endpoints: latency-svc-58ns5 [1.186601324s] May 25 00:24:26.575: INFO: Created: latency-svc-j86tv May 25 00:24:26.618: INFO: Got endpoints: latency-svc-j86tv [1.16689405s] May 25 00:24:26.645: INFO: Created: latency-svc-l7bkt May 25 00:24:26.708: INFO: Got endpoints: latency-svc-l7bkt [1.226894014s] May 25 00:24:26.749: INFO: Created: latency-svc-9cdts May 25 00:24:26.767: INFO: Got endpoints: latency-svc-9cdts [1.236707985s] May 25 00:24:26.797: INFO: Created: latency-svc-hx9qf May 25 00:24:26.883: INFO: Got endpoints: latency-svc-hx9qf [1.250691637s] May 25 00:24:26.924: INFO: Created: latency-svc-tl455 May 25 00:24:26.924: INFO: Created: latency-svc-s55zn May 25 00:24:26.929: INFO: Got endpoints: latency-svc-s55zn [1.206488024s] May 25 00:24:26.929: INFO: Got endpoints: latency-svc-tl455 [1.242649866s] May 25 00:24:26.970: INFO: Created: latency-svc-sspjf May 25 00:24:27.038: INFO: Got endpoints: latency-svc-sspjf [1.241473784s] May 25 00:24:27.061: INFO: Created: latency-svc-fl54k May 25 00:24:27.080: INFO: Got endpoints: latency-svc-fl54k [1.215432699s] May 25 00:24:27.115: INFO: Created: latency-svc-nnqcc May 25 00:24:27.129: INFO: Got endpoints: latency-svc-nnqcc [1.129516995s] May 25 00:24:27.182: INFO: Created: latency-svc-qcwx6 May 25 00:24:27.185: INFO: Got endpoints: latency-svc-qcwx6 [1.155470433s] May 25 00:24:27.210: INFO: Created: latency-svc-7hhb5 May 25 00:24:27.235: INFO: Got endpoints: latency-svc-7hhb5 [1.125884509s] May 25 00:24:27.265: INFO: Created: latency-svc-wl5dt May 25 00:24:27.280: INFO: Got endpoints: latency-svc-wl5dt [1.031511943s] May 25 00:24:27.332: INFO: Created: latency-svc-vhw56 May 25 00:24:27.338: INFO: Got endpoints: latency-svc-vhw56 [1.002930911s] May 25 00:24:27.389: INFO: Created: latency-svc-nmr6t May 25 00:24:27.413: INFO: Got endpoints: latency-svc-nmr6t [959.509153ms] May 25 00:24:27.469: INFO: Created: latency-svc-s4jhl May 25 00:24:27.473: INFO: Got endpoints: latency-svc-s4jhl [925.27834ms] May 25 00:24:27.523: INFO: Created: latency-svc-qcnpc May 25 00:24:27.539: INFO: Got endpoints: latency-svc-qcnpc [921.221089ms] May 25 00:24:27.559: INFO: Created: latency-svc-sbspx May 25 00:24:27.638: INFO: Got endpoints: latency-svc-sbspx [929.116471ms] May 25 00:24:27.640: INFO: Created: latency-svc-xpw4p May 25 00:24:27.664: INFO: Got endpoints: latency-svc-xpw4p [897.489345ms] May 25 00:24:27.745: INFO: Created: latency-svc-n6wmz May 25 00:24:27.748: INFO: Got endpoints: latency-svc-n6wmz [865.624708ms] May 25 00:24:27.845: INFO: Created: latency-svc-945jd May 25 00:24:27.930: INFO: Got endpoints: latency-svc-945jd [1.001026323s] May 25 00:24:27.934: INFO: Created: latency-svc-qf78d May 25 00:24:27.955: INFO: Got endpoints: latency-svc-qf78d [1.025593015s] May 25 00:24:27.991: INFO: Created: latency-svc-s2vbg May 25 00:24:28.021: INFO: Got endpoints: latency-svc-s2vbg [982.803512ms] May 25 00:24:28.088: INFO: Created: latency-svc-n9cfl May 25 00:24:28.093: INFO: Got endpoints: latency-svc-n9cfl [1.013109957s] May 25 00:24:28.120: INFO: Created: latency-svc-4fmbg May 25 00:24:28.136: INFO: Got endpoints: latency-svc-4fmbg [1.007213519s] May 25 00:24:28.156: INFO: Created: latency-svc-t4k8w May 25 00:24:28.224: INFO: Got endpoints: latency-svc-t4k8w [1.039085581s] May 25 00:24:28.237: INFO: Created: latency-svc-vhqfw May 25 00:24:28.251: INFO: Got endpoints: latency-svc-vhqfw [1.016168324s] 
May 25 00:24:28.296: INFO: Created: latency-svc-c7rb4 May 25 00:24:28.318: INFO: Got endpoints: latency-svc-c7rb4 [1.038002419s] May 25 00:24:28.415: INFO: Created: latency-svc-cmzg9 May 25 00:24:28.420: INFO: Got endpoints: latency-svc-cmzg9 [1.082091597s] May 25 00:24:28.453: INFO: Created: latency-svc-hr2s5 May 25 00:24:28.469: INFO: Got endpoints: latency-svc-hr2s5 [1.05690057s] May 25 00:24:28.495: INFO: Created: latency-svc-xh5h5 May 25 00:24:28.595: INFO: Got endpoints: latency-svc-xh5h5 [1.122579609s] May 25 00:24:28.612: INFO: Created: latency-svc-8sbgk May 25 00:24:28.631: INFO: Got endpoints: latency-svc-8sbgk [1.09170383s] May 25 00:24:28.631: INFO: Latencies: [83.612215ms 153.750014ms 198.655241ms 240.508423ms 339.172039ms 379.765964ms 454.31883ms 512.711757ms 604.056292ms 752.907581ms 788.247879ms 790.142604ms 792.832084ms 797.056561ms 819.592218ms 843.793534ms 846.863062ms 865.624708ms 874.20861ms 893.9322ms 897.489345ms 900.33493ms 902.93701ms 904.837513ms 906.894971ms 910.152375ms 910.752424ms 921.221089ms 925.27834ms 926.996768ms 929.116471ms 949.245263ms 949.456266ms 953.14412ms 958.114191ms 959.509153ms 966.204744ms 979.126122ms 979.591215ms 979.643568ms 982.742017ms 982.803512ms 982.92034ms 984.283681ms 987.077293ms 990.134607ms 990.818448ms 991.504294ms 993.473158ms 996.927992ms 1.000965347s 1.001026323s 1.002352549s 1.002930911s 1.007213519s 1.013109957s 1.014265282s 1.016168324s 1.016805583s 1.019019983s 1.019356899s 1.020731157s 1.025593015s 1.031511943s 1.03188278s 1.033348456s 1.034419379s 1.038002419s 1.038056002s 1.038845435s 1.039085581s 1.039894563s 1.05584137s 1.05690057s 1.061964492s 1.062420878s 1.071184298s 1.079594505s 1.080733457s 1.082091597s 1.083110891s 1.091502454s 1.09170383s 1.093090789s 1.096355198s 1.09733433s 1.105863207s 1.107518866s 1.10802562s 1.108994871s 1.115761729s 1.117159256s 1.117471188s 1.11806499s 1.118336586s 1.122579609s 1.122800474s 1.122979933s 1.125884509s 1.127867927s 1.128922239s 1.129376141s 1.129516995s 1.135615008s 1.136075669s 1.141576876s 1.143567874s 1.150434169s 1.154173305s 1.155470433s 1.156420132s 1.156583553s 1.159155921s 1.159632389s 1.163999116s 1.166045056s 1.166422437s 1.16689405s 1.169845955s 1.172170324s 1.176991413s 1.186601324s 1.187208096s 1.189891731s 1.190272919s 1.190958914s 1.192981928s 1.206488024s 1.211722363s 1.215432699s 1.218254595s 1.219711549s 1.220564613s 1.22482053s 1.226894014s 1.231283767s 1.23472124s 1.236707985s 1.241473784s 1.242649866s 1.245061951s 1.246188478s 1.250691637s 1.250708977s 1.251749523s 1.257809249s 1.260030132s 1.260764345s 1.263106619s 1.264404322s 1.265486522s 1.266845345s 1.276016103s 1.279013598s 1.279384125s 1.28220314s 1.282678978s 1.282821738s 1.287456367s 1.287856262s 1.287912679s 1.292059325s 1.30320072s 1.305008466s 1.307069389s 1.309170373s 1.311569304s 1.318817681s 1.319701431s 1.341143495s 1.341909892s 1.352244319s 1.355747138s 1.360376801s 1.361404198s 1.362345366s 1.367619566s 1.367797153s 1.371992848s 1.377177897s 1.379046169s 1.379733964s 1.380064745s 1.380081361s 1.385667855s 1.38798161s 1.392346391s 1.396342431s 1.401511128s 1.40316439s 1.409268384s 1.409753219s 1.426096528s 1.427091938s 1.430220713s 1.433910585s 1.453185007s 1.458111624s 1.46255654s 1.466460526s] May 25 00:24:28.631: INFO: 50 %ile: 1.128922239s May 25 00:24:28.631: INFO: 90 %ile: 1.379046169s May 25 00:24:28.631: INFO: 99 %ile: 1.46255654s May 25 00:24:28.631: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:24:28.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8111" for this suite. • [SLOW TEST:19.678 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":133,"skipped":1889,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:24:28.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-1913 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1913 to expose endpoints map[] May 25 00:24:28.888: INFO: Get endpoints failed (63.803152ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 25 00:24:29.892: INFO: successfully validated that service endpoint-test2 in namespace services-1913 exposes endpoints map[] (1.067742091s elapsed) STEP: Creating pod pod1 in namespace services-1913 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1913 to expose endpoints map[pod1:[80]] May 25 00:24:34.038: INFO: successfully validated that service endpoint-test2 in namespace services-1913 exposes endpoints map[pod1:[80]] (4.139081593s elapsed) STEP: Creating pod pod2 in namespace services-1913 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1913 to expose endpoints map[pod1:[80] pod2:[80]] May 25 00:24:38.914: INFO: successfully validated that service endpoint-test2 in namespace services-1913 exposes endpoints map[pod1:[80] pod2:[80]] (4.869885991s elapsed) STEP: Deleting pod pod1 in namespace services-1913 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1913 to expose endpoints map[pod2:[80]] May 25 00:24:43.543: INFO: Unexpected endpoints: found map[20c6b78b-ecd3-4448-beb1-465b7baf474d:[80] f6d09621-69e2-4a56-807a-8abedfb7361e:[80]], expected map[pod2:[80]] (4.588042011s elapsed, will retry) May 25 00:24:47.766: INFO: successfully validated that service endpoint-test2 in namespace services-1913 exposes endpoints map[pod2:[80]] (8.810732701s elapsed) STEP: Deleting pod pod2 in namespace services-1913 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1913 to expose endpoints map[] May 25 00:24:51.858: INFO: Unexpected endpoints: found map[20c6b78b-ecd3-4448-beb1-465b7baf474d:[80]], expected map[] (4.061018221s elapsed, 
will retry) May 25 00:24:52.882: INFO: successfully validated that service endpoint-test2 in namespace services-1913 exposes endpoints map[] (5.085320108s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:24:52.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1913" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:24.467 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":134,"skipped":1919,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:24:53.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 25 00:24:53.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2776' May 25 00:24:53.722: INFO: stderr: "" May 25 00:24:53.722: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 25 00:24:54.763: INFO: Selector matched 1 pods for map[app:agnhost] May 25 00:24:54.763: INFO: Found 0 / 1 May 25 00:24:55.879: INFO: Selector matched 1 pods for map[app:agnhost] May 25 00:24:55.879: INFO: Found 0 / 1 May 25 00:24:56.739: INFO: Selector matched 1 pods for map[app:agnhost] May 25 00:24:56.739: INFO: Found 0 / 1 May 25 00:24:57.745: INFO: Selector matched 1 pods for map[app:agnhost] May 25 00:24:57.745: INFO: Found 0 / 1 May 25 00:24:58.788: INFO: Selector matched 1 pods for map[app:agnhost] May 25 00:24:58.788: INFO: Found 1 / 1 May 25 00:24:58.788: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 25 00:24:58.814: INFO: Selector matched 1 pods for map[app:agnhost] May 25 00:24:58.814: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
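The patch kubectl applies in the next log line is a strategic merge patch (kubectl's default for built-in types). A minimal client-go sketch of the same operation, reusing the pod and namespace names from this run and assuming an already-constructed clientset:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
)

// annotatePod applies the same {"metadata":{"annotations":{"x":"y"}}}
// patch that the test issues via kubectl below.
func annotatePod(cs kubernetes.Interface) error {
    patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
    _, err := cs.CoreV1().Pods("kubectl-2776").Patch(
        context.TODO(), "agnhost-master-fv2rb",
        types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    return err
}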
May 25 00:24:58.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-fv2rb --namespace=kubectl-2776 -p {"metadata":{"annotations":{"x":"y"}}}' May 25 00:24:58.985: INFO: stderr: "" May 25 00:24:58.985: INFO: stdout: "pod/agnhost-master-fv2rb patched\n" STEP: checking annotations May 25 00:24:59.006: INFO: Selector matched 1 pods for map[app:agnhost] May 25 00:24:59.007: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:24:59.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2776" for this suite. • [SLOW TEST:5.956 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1468 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":135,"skipped":1969,"failed":0} SSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:24:59.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-9136e3d8-f61f-4e58-9f72-3e39606a1f89 STEP: Creating secret with name s-test-opt-upd-9ba672be-47ac-4569-ad3a-31f8cacb7016 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-9136e3d8-f61f-4e58-9f72-3e39606a1f89 STEP: Updating secret s-test-opt-upd-9ba672be-47ac-4569-ad3a-31f8cacb7016 STEP: Creating secret with name s-test-opt-create-a8221b99-7b80-4c16-893e-4b2ab2609cb9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:26:40.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9260" for this suite. 
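A sketch of the volume shape the Secrets test above exercises: a secret volume marked optional, so the kubelet tolerates the referenced secret being deleted, updated, or created while the pod runs, and the mount reflects the changes. This is a minimal illustration against the core/v1 types, not the test's own helper code; the volume name is illustrative.

package main

import corev1 "k8s.io/api/core/v1"

// optionalSecretVolume returns a pod volume backed by a secret that may
// legitimately be absent; updates to the secret propagate into the mount.
func optionalSecretVolume(secretName string) corev1.Volume {
    optional := true
    return corev1.Volume{
        Name: "secret-volume",
        VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{
                SecretName: secretName,
                Optional:   &optional,
            },
        },
    }
}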
• [SLOW TEST:101.102 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":136,"skipped":1973,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:26:40.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 25 00:26:50.411: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 00:26:50.436: INFO: Pod pod-with-poststart-exec-hook still exists May 25 00:26:52.436: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 00:26:52.441: INFO: Pod pod-with-poststart-exec-hook still exists May 25 00:26:54.436: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 00:26:54.441: INFO: Pod pod-with-poststart-exec-hook still exists May 25 00:26:56.436: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 00:26:56.441: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:26:56.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1221" for this suite. 
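For reference, a postStart exec hook of the kind the lifecycle test above attaches, written against the v1.18-era core/v1 types this suite was built with (the Handler type was renamed LifecycleHandler in later releases); the hook command itself is illustrative:

package main

import corev1 "k8s.io/api/core/v1"

// withPostStart attaches an exec postStart hook; the container is only
// treated as started once the hook command has completed.
func withPostStart(c corev1.Container) corev1.Container {
    c.Lifecycle = &corev1.Lifecycle{
        PostStart: &corev1.Handler{
            Exec: &corev1.ExecAction{
                Command: []string{"/bin/sh", "-c", "echo started > /tmp/poststart"},
            },
        },
    }
    return c
}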
• [SLOW TEST:16.262 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":137,"skipped":2026,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:26:56.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 25 00:27:04.698: INFO: 10 pods remaining May 25 00:27:04.698: INFO: 0 pods has nil DeletionTimestamp May 25 00:27:04.698: INFO: May 25 00:27:05.472: INFO: 0 pods remaining May 25 00:27:05.472: INFO: 0 pods has nil DeletionTimestamp May 25 00:27:05.472: INFO: May 25 00:27:06.477: INFO: 0 pods remaining May 25 00:27:06.477: INFO: 0 pods has nil DeletionTimestamp May 25 00:27:06.477: INFO: May 25 00:27:07.088: INFO: 0 pods remaining May 25 00:27:07.088: INFO: 0 pods has nil DeletionTimestamp May 25 00:27:07.088: INFO: STEP: Gathering metrics W0525 00:27:08.527501 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
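The behavior checked above, the rc remaining visible (with a deletionTimestamp) until the garbage collector has removed its pods, is what foreground cascading deletion provides. A client-go sketch under that assumption, with namespace and name as parameters since the rc's name is not shown in this log:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// deleteRCForeground deletes a replication controller with
// PropagationPolicy=Foreground, so the owner object is kept until the
// GC has deleted all dependent pods.
func deleteRCForeground(cs kubernetes.Interface, ns, name string) error {
    fg := metav1.DeletePropagationForeground
    return cs.CoreV1().ReplicationControllers(ns).Delete(
        context.TODO(), name, metav1.DeleteOptions{PropagationPolicy: &fg})
}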
May 25 00:27:08.527: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:27:08.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3542" for this suite. • [SLOW TEST:12.275 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":138,"skipped":2037,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:27:08.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 25 00:27:09.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4961' May 25 00:27:13.559: INFO: stderr: "" May 25 00:27:13.559: INFO: stdout: "pod/pause created\n" May 25 00:27:13.559: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 25 00:27:13.559: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4961" to be "running and ready" May 25 00:27:13.582: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 22.969112ms May 25 00:27:15.716: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156872275s May 25 00:27:17.719: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.160434267s May 25 00:27:17.720: INFO: Pod "pause" satisfied condition "running and ready" May 25 00:27:17.720: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 25 00:27:17.720: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4961' May 25 00:27:17.821: INFO: stderr: "" May 25 00:27:17.821: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 25 00:27:17.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4961' May 25 00:27:17.931: INFO: stderr: "" May 25 00:27:17.931: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 25 00:27:17.931: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4961' May 25 00:27:18.051: INFO: stderr: "" May 25 00:27:18.051: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 25 00:27:18.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4961' May 25 00:27:18.160: INFO: stderr: "" May 25 00:27:18.160: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 25 00:27:18.160: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4961' May 25 00:27:18.309: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 00:27:18.309: INFO: stdout: "pod \"pause\" force deleted\n" May 25 00:27:18.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4961' May 25 00:27:18.673: INFO: stderr: "No resources found in kubectl-4961 namespace.\n" May 25 00:27:18.673: INFO: stdout: "" May 25 00:27:18.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4961 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 00:27:18.781: INFO: stderr: "" May 25 00:27:18.781: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:27:18.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4961" for this suite. 
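The kubectl label add/remove pair above can be reproduced with client-go as two strategic merge patches; a null value removes the key. A sketch reusing the pod and namespace names from this run:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
)

// setAndClearLabel mirrors `kubectl label pods pause testing-label=testing-label-value`
// followed by `kubectl label pods pause testing-label-`.
func setAndClearLabel(cs kubernetes.Interface) error {
    pods := cs.CoreV1().Pods("kubectl-4961")
    add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
    if _, err := pods.Patch(context.TODO(), "pause",
        types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
        return err
    }
    remove := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
    _, err := pods.Patch(context.TODO(), "pause",
        types.StrategicMergePatchType, remove, metav1.PatchOptions{})
    return err
}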
• [SLOW TEST:10.061 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":139,"skipped":2051,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:27:18.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:27:18.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2600" for this suite. 
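The discovery walk this test performs (fetch /apis, find the group, then the group/version, then the resource) can be approximated with the discovery client rather than raw GETs; a minimal sketch covering the first two steps:

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
)

// findAPIExtensions lists the /apis discovery document and prints the
// versions advertised for apiextensions.k8s.io (expecting v1 among them).
func findAPIExtensions(cs kubernetes.Interface) error {
    groups, err := cs.Discovery().ServerGroups()
    if err != nil {
        return err
    }
    for _, g := range groups.Groups {
        if g.Name == "apiextensions.k8s.io" {
            for _, v := range g.Versions {
                fmt.Println(v.GroupVersion) // e.g. apiextensions.k8s.io/v1
            }
        }
    }
    return nil
}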
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":140,"skipped":2052,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:27:19.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 25 00:27:19.158: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 25 00:27:29.778: INFO: >>> kubeConfig: /root/.kube/config May 25 00:27:32.700: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:27:44.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2667" for this suite. 
• [SLOW TEST:25.314 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":141,"skipped":2091,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:27:44.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 25 00:27:44.498: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:27:51.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1734" for this suite. 
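The "setting up watch" step above, in outline: open a watch on the pod's namespace before submitting the pod, then confirm that the ADDED and DELETED events arrive. A minimal client-go sketch (this run used namespace pods-1734; here the namespace is a parameter):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/kubernetes"
)

// observePodLifecycle watches a namespace and reports pod add/delete
// events until the watch channel is closed.
func observePodLifecycle(cs kubernetes.Interface, ns string) error {
    w, err := cs.CoreV1().Pods(ns).Watch(context.TODO(), metav1.ListOptions{})
    if err != nil {
        return err
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        if ev.Type == watch.Added || ev.Type == watch.Deleted {
            fmt.Println("observed pod event:", ev.Type)
        }
    }
    return nil
}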
• [SLOW TEST:7.458 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":142,"skipped":2103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:27:51.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components May 25 00:27:51.883: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
May 25 00:27:51.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4699' May 25 00:27:52.233: INFO: stderr: "" May 25 00:27:52.233: INFO: stdout: "service/agnhost-slave created\n" May 25 00:27:52.233: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
May 25 00:27:52.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4699' May 25 00:27:52.589: INFO: stderr: "" May 25 00:27:52.589: INFO: stdout: "service/agnhost-master created\n" May 25 00:27:52.589: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 25 00:27:52.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4699' May 25 00:27:52.959: INFO: stderr: "" May 25 00:27:52.960: INFO: stdout: "service/frontend created\n" May 25 00:27:52.960: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
May 25 00:27:52.960: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4699' May 25 00:27:53.254: INFO: stderr: "" May 25 00:27:53.254: INFO: stdout: "deployment.apps/frontend created\n" May 25 00:27:53.255: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 25 00:27:53.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4699' May 25 00:27:53.669: INFO: stderr: "" May 25 00:27:53.669: INFO: stdout: "deployment.apps/agnhost-master created\n" May 25 00:27:53.670: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 25 00:27:53.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4699' May 25 00:27:53.957: INFO: stderr: "" May 25 00:27:53.957: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 25 00:27:53.957: INFO: Waiting for all frontend pods to be Running. May 25 00:28:04.008: INFO: Waiting for frontend to serve content. May 25 00:28:04.025: INFO: Trying to add a new entry to the guestbook. May 25 00:28:04.034: INFO: Verifying that added entry can be retrieved. May 25 00:28:04.041: INFO: Failed to get response from guestbook. err: , response: {"data":""} STEP: using delete to clean up resources May 25 00:28:09.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4699' May 25 00:28:09.266: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" May 25 00:28:09.266: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 25 00:28:09.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4699' May 25 00:28:09.499: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 00:28:09.499: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 25 00:28:09.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4699' May 25 00:28:09.652: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 00:28:09.652: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 25 00:28:09.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4699' May 25 00:28:09.748: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 00:28:09.748: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 25 00:28:09.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4699' May 25 00:28:09.868: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 00:28:09.868: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 25 00:28:09.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4699' May 25 00:28:10.644: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 00:28:10.644: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:28:10.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4699" for this suite. 
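The transient "Failed to get response from guestbook ... {"data":""}" above is why the validation step polls instead of asserting once: the entry only becomes readable after the frontend can reach its backend. A generic polling sketch in the same spirit, with the URL and expected substring left as parameters since the exact guestbook query is not shown in this log:

package main

import (
    "io/ioutil"
    "net/http"
    "strings"
    "time"
)

// waitForContent polls url until the response body contains want,
// retrying up to 30 times with a fixed 5s delay between attempts.
func waitForContent(url, want string) bool {
    for i := 0; i < 30; i++ {
        if resp, err := http.Get(url); err == nil {
            body, _ := ioutil.ReadAll(resp.Body)
            resp.Body.Close()
            if strings.Contains(string(body), want) {
                return true
            }
        }
        time.Sleep(5 * time.Second)
    }
    return false
}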
• [SLOW TEST:18.846 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":143,"skipped":2143,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:28:10.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-274c5160-b8df-4297-94e2-6b682986b2ae STEP: Creating secret with name s-test-opt-upd-e5652629-71c6-4dea-8636-f3ec3972dfa8 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-274c5160-b8df-4297-94e2-6b682986b2ae STEP: Updating secret s-test-opt-upd-e5652629-71c6-4dea-8636-f3ec3972dfa8 STEP: Creating secret with name s-test-opt-create-04cebb19-331e-402f-a2fa-e319033a8b96 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:28:24.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2795" for this suite. 
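Same optional-secret idea as the earlier [sig-storage] Secrets test, but delivered through a projected volume; a sketch of the volume source, with the volume name illustrative:

package main

import corev1 "k8s.io/api/core/v1"

// projectedOptionalSecret builds a projected volume drawing from a single
// secret source marked optional, so deleting the secret does not break
// the mount and updates are still propagated into it.
func projectedOptionalSecret(secretName string) corev1.Volume {
    optional := true
    return corev1.Volume{
        Name: "projected-secret-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                        Optional:             &optional,
                    },
                }},
            },
        },
    }
}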
• [SLOW TEST:13.542 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":144,"skipped":2177,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:28:24.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-79cca4b7-ae51-4108-b9b6-b4326a2d7d2d in namespace container-probe-7522 May 25 00:28:28.347: INFO: Started pod liveness-79cca4b7-ae51-4108-b9b6-b4326a2d7d2d in namespace container-probe-7522 STEP: checking the pod's current state and verifying that restartCount is present May 25 00:28:28.368: INFO: Initial restart count of pod liveness-79cca4b7-ae51-4108-b9b6-b4326a2d7d2d is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:32:29.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7522" for this suite. 
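A tcp:8080 liveness probe of the kind this test attaches, against the v1.18-era types where Probe embeds Handler; the delay and period values are illustrative, not the test's own:

package main

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// tcpLivenessProbe returns a probe that succeeds as long as something is
// listening on container port 8080, so restartCount should stay at 0.
func tcpLivenessProbe() *corev1.Probe {
    return &corev1.Probe{
        Handler: corev1.Handler{
            TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
        },
        InitialDelaySeconds: 15,
        PeriodSeconds:       10,
    }
}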
• [SLOW TEST:244.923 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":145,"skipped":2210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:32:29.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 00:32:29.452: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e882708b-1f7a-41ac-b02f-e662f44ecce4" in namespace "projected-3031" to be "Succeeded or Failed" May 25 00:32:29.635: INFO: Pod "downwardapi-volume-e882708b-1f7a-41ac-b02f-e662f44ecce4": Phase="Pending", Reason="", readiness=false. Elapsed: 182.521907ms May 25 00:32:31.639: INFO: Pod "downwardapi-volume-e882708b-1f7a-41ac-b02f-e662f44ecce4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186690853s May 25 00:32:33.644: INFO: Pod "downwardapi-volume-e882708b-1f7a-41ac-b02f-e662f44ecce4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.191370547s STEP: Saw pod success May 25 00:32:33.644: INFO: Pod "downwardapi-volume-e882708b-1f7a-41ac-b02f-e662f44ecce4" satisfied condition "Succeeded or Failed" May 25 00:32:33.647: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e882708b-1f7a-41ac-b02f-e662f44ecce4 container client-container: STEP: delete the pod May 25 00:32:33.688: INFO: Waiting for pod downwardapi-volume-e882708b-1f7a-41ac-b02f-e662f44ecce4 to disappear May 25 00:32:33.703: INFO: Pod downwardapi-volume-e882708b-1f7a-41ac-b02f-e662f44ecce4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:32:33.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3031" for this suite. 
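The downward API volume this test mounts exposes the container's cpu limit as a file; a sketch of such a volume, reusing the container name from this run (the divisor is left unset, which the API server defaults to "1"):

package main

import corev1 "k8s.io/api/core/v1"

// cpuLimitDownwardVolume exposes limits.cpu of the named container as the
// file "cpu_limit" inside the mount.
func cpuLimitDownwardVolume() corev1.Volume {
    return corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "cpu_limit",
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "limits.cpu",
                    },
                }},
            },
        },
    }
}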
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":146,"skipped":2264,"failed":0} ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:32:33.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1972 May 25 00:32:37.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1972 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 25 00:32:38.121: INFO: stderr: "I0525 00:32:37.978297 3139 log.go:172] (0xc0009b8000) (0xc000670460) Create stream\nI0525 00:32:37.978361 3139 log.go:172] (0xc0009b8000) (0xc000670460) Stream added, broadcasting: 1\nI0525 00:32:37.980884 3139 log.go:172] (0xc0009b8000) Reply frame received for 1\nI0525 00:32:37.980912 3139 log.go:172] (0xc0009b8000) (0xc00023f9a0) Create stream\nI0525 00:32:37.980920 3139 log.go:172] (0xc0009b8000) (0xc00023f9a0) Stream added, broadcasting: 3\nI0525 00:32:37.981887 3139 log.go:172] (0xc0009b8000) Reply frame received for 3\nI0525 00:32:37.981956 3139 log.go:172] (0xc0009b8000) (0xc00069a820) Create stream\nI0525 00:32:37.981987 3139 log.go:172] (0xc0009b8000) (0xc00069a820) Stream added, broadcasting: 5\nI0525 00:32:37.983022 3139 log.go:172] (0xc0009b8000) Reply frame received for 5\nI0525 00:32:38.083194 3139 log.go:172] (0xc0009b8000) Data frame received for 5\nI0525 00:32:38.083225 3139 log.go:172] (0xc00069a820) (5) Data frame handling\nI0525 00:32:38.083246 3139 log.go:172] (0xc00069a820) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0525 00:32:38.112809 3139 log.go:172] (0xc0009b8000) Data frame received for 3\nI0525 00:32:38.112843 3139 log.go:172] (0xc00023f9a0) (3) Data frame handling\nI0525 00:32:38.112887 3139 log.go:172] (0xc00023f9a0) (3) Data frame sent\nI0525 00:32:38.113024 3139 log.go:172] (0xc0009b8000) Data frame received for 5\nI0525 00:32:38.113042 3139 log.go:172] (0xc00069a820) (5) Data frame handling\nI0525 00:32:38.113366 3139 log.go:172] (0xc0009b8000) Data frame received for 3\nI0525 00:32:38.113387 3139 log.go:172] (0xc00023f9a0) (3) Data frame handling\nI0525 00:32:38.115353 3139 log.go:172] (0xc0009b8000) Data frame received for 1\nI0525 00:32:38.115367 3139 log.go:172] (0xc000670460) (1) Data frame handling\nI0525 00:32:38.115381 3139 log.go:172] (0xc000670460) (1) Data frame sent\nI0525 00:32:38.115444 3139 log.go:172] (0xc0009b8000) (0xc000670460) Stream removed, broadcasting: 1\nI0525 00:32:38.115535 3139 log.go:172] (0xc0009b8000) Go away received\nI0525 00:32:38.115975 3139 
log.go:172] (0xc0009b8000) (0xc000670460) Stream removed, broadcasting: 1\nI0525 00:32:38.115998 3139 log.go:172] (0xc0009b8000) (0xc00023f9a0) Stream removed, broadcasting: 3\nI0525 00:32:38.116013 3139 log.go:172] (0xc0009b8000) (0xc00069a820) Stream removed, broadcasting: 5\n" May 25 00:32:38.121: INFO: stdout: "iptables" May 25 00:32:38.121: INFO: proxyMode: iptables May 25 00:32:38.127: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 00:32:38.135: INFO: Pod kube-proxy-mode-detector still exists May 25 00:32:40.135: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 00:32:40.140: INFO: Pod kube-proxy-mode-detector still exists May 25 00:32:42.135: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 00:32:42.140: INFO: Pod kube-proxy-mode-detector still exists May 25 00:32:44.135: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 00:32:44.140: INFO: Pod kube-proxy-mode-detector still exists May 25 00:32:46.135: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 00:32:46.139: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-1972 STEP: creating replication controller affinity-nodeport-timeout in namespace services-1972 I0525 00:32:46.253697 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1972, replica count: 3 I0525 00:32:49.304182 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 00:32:52.304406 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 00:32:52.314: INFO: Creating new exec pod May 25 00:32:57.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1972 execpod-affinityfwbqk -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 25 00:32:57.551: INFO: stderr: "I0525 00:32:57.459402 3160 log.go:172] (0xc0009e7600) (0xc000a5a320) Create stream\nI0525 00:32:57.459443 3160 log.go:172] (0xc0009e7600) (0xc000a5a320) Stream added, broadcasting: 1\nI0525 00:32:57.463377 3160 log.go:172] (0xc0009e7600) Reply frame received for 1\nI0525 00:32:57.463407 3160 log.go:172] (0xc0009e7600) (0xc000678aa0) Create stream\nI0525 00:32:57.463415 3160 log.go:172] (0xc0009e7600) (0xc000678aa0) Stream added, broadcasting: 3\nI0525 00:32:57.464174 3160 log.go:172] (0xc0009e7600) Reply frame received for 3\nI0525 00:32:57.464198 3160 log.go:172] (0xc0009e7600) (0xc0006a7f40) Create stream\nI0525 00:32:57.464212 3160 log.go:172] (0xc0009e7600) (0xc0006a7f40) Stream added, broadcasting: 5\nI0525 00:32:57.464859 3160 log.go:172] (0xc0009e7600) Reply frame received for 5\nI0525 00:32:57.540613 3160 log.go:172] (0xc0009e7600) Data frame received for 5\nI0525 00:32:57.540645 3160 log.go:172] (0xc0006a7f40) (5) Data frame handling\nI0525 00:32:57.540662 3160 log.go:172] (0xc0006a7f40) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0525 00:32:57.544888 3160 log.go:172] (0xc0009e7600) Data frame received for 5\nI0525 00:32:57.544913 3160 log.go:172] (0xc0006a7f40) (5) Data frame handling\nI0525 00:32:57.544933 3160 log.go:172] (0xc0006a7f40) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0525 00:32:57.545032 3160 
log.go:172] (0xc0009e7600) Data frame received for 5\nI0525 00:32:57.545056 3160 log.go:172] (0xc0006a7f40) (5) Data frame handling\nI0525 00:32:57.545292 3160 log.go:172] (0xc0009e7600) Data frame received for 3\nI0525 00:32:57.545312 3160 log.go:172] (0xc000678aa0) (3) Data frame handling\nI0525 00:32:57.546509 3160 log.go:172] (0xc0009e7600) Data frame received for 1\nI0525 00:32:57.546529 3160 log.go:172] (0xc000a5a320) (1) Data frame handling\nI0525 00:32:57.546537 3160 log.go:172] (0xc000a5a320) (1) Data frame sent\nI0525 00:32:57.546551 3160 log.go:172] (0xc0009e7600) (0xc000a5a320) Stream removed, broadcasting: 1\nI0525 00:32:57.546572 3160 log.go:172] (0xc0009e7600) Go away received\nI0525 00:32:57.546864 3160 log.go:172] (0xc0009e7600) (0xc000a5a320) Stream removed, broadcasting: 1\nI0525 00:32:57.546883 3160 log.go:172] (0xc0009e7600) (0xc000678aa0) Stream removed, broadcasting: 3\nI0525 00:32:57.546891 3160 log.go:172] (0xc0009e7600) (0xc0006a7f40) Stream removed, broadcasting: 5\n" May 25 00:32:57.551: INFO: stdout: "" May 25 00:32:57.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1972 execpod-affinityfwbqk -- /bin/sh -x -c nc -zv -t -w 2 10.109.55.35 80' May 25 00:32:57.757: INFO: stderr: "I0525 00:32:57.668783 3181 log.go:172] (0xc000a5f4a0) (0xc000ae0460) Create stream\nI0525 00:32:57.670092 3181 log.go:172] (0xc000a5f4a0) (0xc000ae0460) Stream added, broadcasting: 1\nI0525 00:32:57.674508 3181 log.go:172] (0xc000a5f4a0) Reply frame received for 1\nI0525 00:32:57.674543 3181 log.go:172] (0xc000a5f4a0) (0xc000562140) Create stream\nI0525 00:32:57.674559 3181 log.go:172] (0xc000a5f4a0) (0xc000562140) Stream added, broadcasting: 3\nI0525 00:32:57.675599 3181 log.go:172] (0xc000a5f4a0) Reply frame received for 3\nI0525 00:32:57.675667 3181 log.go:172] (0xc000a5f4a0) (0xc00042ad20) Create stream\nI0525 00:32:57.675703 3181 log.go:172] (0xc000a5f4a0) (0xc00042ad20) Stream added, broadcasting: 5\nI0525 00:32:57.676752 3181 log.go:172] (0xc000a5f4a0) Reply frame received for 5\nI0525 00:32:57.744973 3181 log.go:172] (0xc000a5f4a0) Data frame received for 3\nI0525 00:32:57.745023 3181 log.go:172] (0xc000562140) (3) Data frame handling\nI0525 00:32:57.745053 3181 log.go:172] (0xc000a5f4a0) Data frame received for 5\nI0525 00:32:57.745065 3181 log.go:172] (0xc00042ad20) (5) Data frame handling\nI0525 00:32:57.745077 3181 log.go:172] (0xc00042ad20) (5) Data frame sent\nI0525 00:32:57.745089 3181 log.go:172] (0xc000a5f4a0) Data frame received for 5\nI0525 00:32:57.745098 3181 log.go:172] (0xc00042ad20) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.55.35 80\nConnection to 10.109.55.35 80 port [tcp/http] succeeded!\nI0525 00:32:57.748182 3181 log.go:172] (0xc000a5f4a0) Data frame received for 1\nI0525 00:32:57.748228 3181 log.go:172] (0xc000ae0460) (1) Data frame handling\nI0525 00:32:57.748272 3181 log.go:172] (0xc000ae0460) (1) Data frame sent\nI0525 00:32:57.748451 3181 log.go:172] (0xc000a5f4a0) (0xc000ae0460) Stream removed, broadcasting: 1\nI0525 00:32:57.748607 3181 log.go:172] (0xc000a5f4a0) Go away received\nI0525 00:32:57.749061 3181 log.go:172] (0xc000a5f4a0) (0xc000ae0460) Stream removed, broadcasting: 1\nI0525 00:32:57.749356 3181 log.go:172] (0xc000a5f4a0) (0xc000562140) Stream removed, broadcasting: 3\nI0525 00:32:57.749406 3181 log.go:172] (0xc000a5f4a0) (0xc00042ad20) Stream removed, broadcasting: 5\n" May 25 00:32:57.757: INFO: stdout: "" May 25 00:32:57.757: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1972 execpod-affinityfwbqk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30029' May 25 00:32:57.953: INFO: stderr: "I0525 00:32:57.885033 3203 log.go:172] (0xc00098a840) (0xc0005341e0) Create stream\nI0525 00:32:57.885108 3203 log.go:172] (0xc00098a840) (0xc0005341e0) Stream added, broadcasting: 1\nI0525 00:32:57.888353 3203 log.go:172] (0xc00098a840) Reply frame received for 1\nI0525 00:32:57.888394 3203 log.go:172] (0xc00098a840) (0xc0004ded20) Create stream\nI0525 00:32:57.888407 3203 log.go:172] (0xc00098a840) (0xc0004ded20) Stream added, broadcasting: 3\nI0525 00:32:57.889610 3203 log.go:172] (0xc00098a840) Reply frame received for 3\nI0525 00:32:57.889659 3203 log.go:172] (0xc00098a840) (0xc000535180) Create stream\nI0525 00:32:57.889675 3203 log.go:172] (0xc00098a840) (0xc000535180) Stream added, broadcasting: 5\nI0525 00:32:57.890770 3203 log.go:172] (0xc00098a840) Reply frame received for 5\nI0525 00:32:57.948017 3203 log.go:172] (0xc00098a840) Data frame received for 5\nI0525 00:32:57.948059 3203 log.go:172] (0xc000535180) (5) Data frame handling\nI0525 00:32:57.948072 3203 log.go:172] (0xc000535180) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 30029\nConnection to 172.17.0.13 30029 port [tcp/30029] succeeded!\nI0525 00:32:57.948091 3203 log.go:172] (0xc00098a840) Data frame received for 3\nI0525 00:32:57.948101 3203 log.go:172] (0xc0004ded20) (3) Data frame handling\nI0525 00:32:57.948184 3203 log.go:172] (0xc00098a840) Data frame received for 5\nI0525 00:32:57.948206 3203 log.go:172] (0xc000535180) (5) Data frame handling\nI0525 00:32:57.950107 3203 log.go:172] (0xc00098a840) Data frame received for 1\nI0525 00:32:57.950121 3203 log.go:172] (0xc0005341e0) (1) Data frame handling\nI0525 00:32:57.950128 3203 log.go:172] (0xc0005341e0) (1) Data frame sent\nI0525 00:32:57.950137 3203 log.go:172] (0xc00098a840) (0xc0005341e0) Stream removed, broadcasting: 1\nI0525 00:32:57.950165 3203 log.go:172] (0xc00098a840) Go away received\nI0525 00:32:57.950473 3203 log.go:172] (0xc00098a840) (0xc0005341e0) Stream removed, broadcasting: 1\nI0525 00:32:57.950488 3203 log.go:172] (0xc00098a840) (0xc0004ded20) Stream removed, broadcasting: 3\nI0525 00:32:57.950493 3203 log.go:172] (0xc00098a840) (0xc000535180) Stream removed, broadcasting: 5\n" May 25 00:32:57.953: INFO: stdout: "" May 25 00:32:57.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1972 execpod-affinityfwbqk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30029' May 25 00:32:58.175: INFO: stderr: "I0525 00:32:58.088899 3224 log.go:172] (0xc00003a420) (0xc00051b220) Create stream\nI0525 00:32:58.088957 3224 log.go:172] (0xc00003a420) (0xc00051b220) Stream added, broadcasting: 1\nI0525 00:32:58.091617 3224 log.go:172] (0xc00003a420) Reply frame received for 1\nI0525 00:32:58.091652 3224 log.go:172] (0xc00003a420) (0xc00030b860) Create stream\nI0525 00:32:58.091660 3224 log.go:172] (0xc00003a420) (0xc00030b860) Stream added, broadcasting: 3\nI0525 00:32:58.092746 3224 log.go:172] (0xc00003a420) Reply frame received for 3\nI0525 00:32:58.092782 3224 log.go:172] (0xc00003a420) (0xc00030bae0) Create stream\nI0525 00:32:58.092800 3224 log.go:172] (0xc00003a420) (0xc00030bae0) Stream added, broadcasting: 5\nI0525 00:32:58.094000 3224 log.go:172] (0xc00003a420) Reply frame received for 5\nI0525 00:32:58.167993 3224 log.go:172] 
(0xc00003a420) Data frame received for 5\nI0525 00:32:58.168039 3224 log.go:172] (0xc00030bae0) (5) Data frame handling\nI0525 00:32:58.168058 3224 log.go:172] (0xc00030bae0) (5) Data frame sent\nI0525 00:32:58.168073 3224 log.go:172] (0xc00003a420) Data frame received for 5\nI0525 00:32:58.168084 3224 log.go:172] (0xc00030bae0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30029\nConnection to 172.17.0.12 30029 port [tcp/30029] succeeded!\nI0525 00:32:58.168115 3224 log.go:172] (0xc00003a420) Data frame received for 3\nI0525 00:32:58.168136 3224 log.go:172] (0xc00030b860) (3) Data frame handling\nI0525 00:32:58.169755 3224 log.go:172] (0xc00003a420) Data frame received for 1\nI0525 00:32:58.169776 3224 log.go:172] (0xc00051b220) (1) Data frame handling\nI0525 00:32:58.169803 3224 log.go:172] (0xc00051b220) (1) Data frame sent\nI0525 00:32:58.169827 3224 log.go:172] (0xc00003a420) (0xc00051b220) Stream removed, broadcasting: 1\nI0525 00:32:58.169849 3224 log.go:172] (0xc00003a420) Go away received\nI0525 00:32:58.170244 3224 log.go:172] (0xc00003a420) (0xc00051b220) Stream removed, broadcasting: 1\nI0525 00:32:58.170271 3224 log.go:172] (0xc00003a420) (0xc00030b860) Stream removed, broadcasting: 3\nI0525 00:32:58.170284 3224 log.go:172] (0xc00003a420) (0xc00030bae0) Stream removed, broadcasting: 5\n" May 25 00:32:58.176: INFO: stdout: "" May 25 00:32:58.176: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1972 execpod-affinityfwbqk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30029/ ; done' May 25 00:32:58.563: INFO: stderr: "I0525 00:32:58.316159 3245 log.go:172] (0xc000bd3130) (0xc000a82500) Create stream\nI0525 00:32:58.316238 3245 log.go:172] (0xc000bd3130) (0xc000a82500) Stream added, broadcasting: 1\nI0525 00:32:58.323104 3245 log.go:172] (0xc000bd3130) Reply frame received for 1\nI0525 00:32:58.323148 3245 log.go:172] (0xc000bd3130) (0xc0005c0d20) Create stream\nI0525 00:32:58.323159 3245 log.go:172] (0xc000bd3130) (0xc0005c0d20) Stream added, broadcasting: 3\nI0525 00:32:58.323947 3245 log.go:172] (0xc000bd3130) Reply frame received for 3\nI0525 00:32:58.323972 3245 log.go:172] (0xc000bd3130) (0xc00059c460) Create stream\nI0525 00:32:58.323980 3245 log.go:172] (0xc000bd3130) (0xc00059c460) Stream added, broadcasting: 5\nI0525 00:32:58.325026 3245 log.go:172] (0xc000bd3130) Reply frame received for 5\nI0525 00:32:58.391660 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.391698 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.391711 3245 log.go:172] (0xc00059c460) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.391728 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.391737 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.391747 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.465633 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.465665 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.465685 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.466163 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.466180 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.466187 3245 log.go:172] (0xc00059c460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 
2 http://172.17.0.13:30029/\nI0525 00:32:58.466197 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.466202 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.466207 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.474179 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.474221 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.474245 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.475384 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.475400 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.475409 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.475438 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.475469 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.475506 3245 log.go:172] (0xc00059c460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.479022 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.479067 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.479089 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.479339 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.479358 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.479364 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.479383 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.479403 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.479420 3245 log.go:172] (0xc00059c460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.482724 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.482750 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.482768 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.483147 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.483163 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.483172 3245 log.go:172] (0xc00059c460) (5) Data frame sent\nI0525 00:32:58.483178 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.483182 3245 log.go:172] (0xc00059c460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.483192 3245 log.go:172] (0xc00059c460) (5) Data frame sent\nI0525 00:32:58.483238 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.483249 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.483258 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.490384 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.490398 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.490409 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.491131 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.491152 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.491165 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.491246 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.491268 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.491336 3245 log.go:172] (0xc00059c460) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.495269 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.495291 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.495312 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.495704 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.495722 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.495739 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.495769 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.495783 3245 log.go:172] (0xc00059c460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.495798 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.499914 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.499944 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.499969 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.500484 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.500507 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.500516 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.500525 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.500539 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.500546 3245 log.go:172] (0xc00059c460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.504272 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.504290 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.504310 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.504907 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.504933 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.504954 3245 log.go:172] (0xc00059c460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.504981 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.505003 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.505014 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.509736 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.509752 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.509765 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.510257 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.510282 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.510297 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.510316 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.510329 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.510342 3245 log.go:172] (0xc00059c460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.516775 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.516794 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.516809 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.517595 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.517610 3245 log.go:172] (0xc00059c460) (5) Data frame handling\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.517627 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.517662 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.517680 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.517703 3245 log.go:172] (0xc00059c460) (5) Data frame sent\nI0525 00:32:58.523142 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.523165 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.523181 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.523804 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.523834 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.523862 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.523878 3245 log.go:172] (0xc00059c460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.523903 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.523920 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.529652 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.529671 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.529684 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.530242 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.530253 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.530281 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.530319 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.530336 3245 log.go:172] (0xc00059c460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.530358 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.534163 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.534185 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.534195 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.534407 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.534437 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.534449 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.534469 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.534479 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.534488 3245 log.go:172] (0xc00059c460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0525 00:32:58.534496 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.534512 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.534525 3245 log.go:172] (0xc00059c460) (5) Data frame sent\n http://172.17.0.13:30029/\nI0525 00:32:58.541312 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.541346 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.541395 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.541926 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.541966 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.541983 3245 log.go:172] (0xc00059c460) (5) Data frame sent\nI0525 00:32:58.541994 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.542012 3245 log.go:172] (0xc00059c460) (5) Data frame handling\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.542042 3245 log.go:172] (0xc00059c460) (5) Data frame sent\nI0525 00:32:58.542065 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.542079 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.542096 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.549326 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.549413 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.549431 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.549441 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.549449 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.549470 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.549484 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.549492 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.549513 3245 log.go:172] (0xc00059c460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.556668 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.556686 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.556701 3245 log.go:172] (0xc0005c0d20) (3) Data frame sent\nI0525 00:32:58.557683 3245 log.go:172] (0xc000bd3130) Data frame received for 5\nI0525 00:32:58.557713 3245 log.go:172] (0xc00059c460) (5) Data frame handling\nI0525 00:32:58.557738 3245 log.go:172] (0xc000bd3130) Data frame received for 3\nI0525 00:32:58.557761 3245 log.go:172] (0xc0005c0d20) (3) Data frame handling\nI0525 00:32:58.559066 3245 log.go:172] (0xc000bd3130) Data frame received for 1\nI0525 00:32:58.559088 3245 log.go:172] (0xc000a82500) (1) Data frame handling\nI0525 00:32:58.559124 3245 log.go:172] (0xc000a82500) (1) Data frame sent\nI0525 00:32:58.559147 3245 log.go:172] (0xc000bd3130) (0xc000a82500) Stream removed, broadcasting: 1\nI0525 00:32:58.559166 3245 log.go:172] (0xc000bd3130) Go away received\nI0525 00:32:58.559592 3245 log.go:172] (0xc000bd3130) (0xc000a82500) Stream removed, broadcasting: 1\nI0525 00:32:58.559607 3245 log.go:172] (0xc000bd3130) (0xc0005c0d20) Stream removed, broadcasting: 3\nI0525 00:32:58.559614 3245 log.go:172] (0xc000bd3130) (0xc00059c460) Stream removed, broadcasting: 5\n" May 25 00:32:58.563: INFO: stdout: "\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc\naffinity-nodeport-timeout-z2tlc" May 25 00:32:58.563: INFO: Received response from host: May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: 
affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Received response from host: affinity-nodeport-timeout-z2tlc May 25 00:32:58.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1972 execpod-affinityfwbqk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30029/' May 25 00:32:58.813: INFO: stderr: "I0525 00:32:58.726665 3267 log.go:172] (0xc0009c1340) (0xc000b121e0) Create stream\nI0525 00:32:58.726771 3267 log.go:172] (0xc0009c1340) (0xc000b121e0) Stream added, broadcasting: 1\nI0525 00:32:58.732064 3267 log.go:172] (0xc0009c1340) Reply frame received for 1\nI0525 00:32:58.732127 3267 log.go:172] (0xc0009c1340) (0xc0008245a0) Create stream\nI0525 00:32:58.732149 3267 log.go:172] (0xc0009c1340) (0xc0008245a0) Stream added, broadcasting: 3\nI0525 00:32:58.733038 3267 log.go:172] (0xc0009c1340) Reply frame received for 3\nI0525 00:32:58.733080 3267 log.go:172] (0xc0009c1340) (0xc0006261e0) Create stream\nI0525 00:32:58.733097 3267 log.go:172] (0xc0009c1340) (0xc0006261e0) Stream added, broadcasting: 5\nI0525 00:32:58.734060 3267 log.go:172] (0xc0009c1340) Reply frame received for 5\nI0525 00:32:58.801985 3267 log.go:172] (0xc0009c1340) Data frame received for 5\nI0525 00:32:58.802012 3267 log.go:172] (0xc0006261e0) (5) Data frame handling\nI0525 00:32:58.802034 3267 log.go:172] (0xc0006261e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:32:58.803870 3267 log.go:172] (0xc0009c1340) Data frame received for 3\nI0525 00:32:58.803905 3267 log.go:172] (0xc0008245a0) (3) Data frame handling\nI0525 00:32:58.803947 3267 log.go:172] (0xc0008245a0) (3) Data frame sent\nI0525 00:32:58.804419 3267 log.go:172] (0xc0009c1340) Data frame received for 3\nI0525 00:32:58.804471 3267 log.go:172] (0xc0008245a0) (3) Data frame handling\nI0525 00:32:58.804503 3267 log.go:172] (0xc0009c1340) Data frame received for 5\nI0525 00:32:58.804527 3267 log.go:172] (0xc0006261e0) (5) Data frame handling\nI0525 00:32:58.806481 3267 log.go:172] (0xc0009c1340) Data frame received for 1\nI0525 00:32:58.806508 3267 log.go:172] (0xc000b121e0) (1) Data frame handling\nI0525 00:32:58.806522 3267 log.go:172] (0xc000b121e0) (1) Data frame sent\nI0525 00:32:58.806536 3267 log.go:172] (0xc0009c1340) (0xc000b121e0) Stream removed, broadcasting: 1\nI0525 00:32:58.806776 3267 log.go:172] (0xc0009c1340) Go away received\nI0525 00:32:58.806980 3267 log.go:172] (0xc0009c1340) (0xc000b121e0) Stream removed, broadcasting: 1\nI0525 00:32:58.807013 3267 log.go:172] (0xc0009c1340) (0xc0008245a0) Stream removed, broadcasting: 3\nI0525 00:32:58.807029 3267 log.go:172] (0xc0009c1340) (0xc0006261e0) Stream removed, 
broadcasting: 5\n" May 25 00:32:58.813: INFO: stdout: "affinity-nodeport-timeout-z2tlc" May 25 00:33:13.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1972 execpod-affinityfwbqk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30029/' May 25 00:33:14.052: INFO: stderr: "I0525 00:33:13.945522 3287 log.go:172] (0xc000bf7600) (0xc00075ce60) Create stream\nI0525 00:33:13.945572 3287 log.go:172] (0xc000bf7600) (0xc00075ce60) Stream added, broadcasting: 1\nI0525 00:33:13.947776 3287 log.go:172] (0xc000bf7600) Reply frame received for 1\nI0525 00:33:13.947802 3287 log.go:172] (0xc000bf7600) (0xc00056afa0) Create stream\nI0525 00:33:13.947809 3287 log.go:172] (0xc000bf7600) (0xc00056afa0) Stream added, broadcasting: 3\nI0525 00:33:13.948731 3287 log.go:172] (0xc000bf7600) Reply frame received for 3\nI0525 00:33:13.948769 3287 log.go:172] (0xc000bf7600) (0xc0007154a0) Create stream\nI0525 00:33:13.948780 3287 log.go:172] (0xc000bf7600) (0xc0007154a0) Stream added, broadcasting: 5\nI0525 00:33:13.949680 3287 log.go:172] (0xc000bf7600) Reply frame received for 5\nI0525 00:33:14.037728 3287 log.go:172] (0xc000bf7600) Data frame received for 5\nI0525 00:33:14.037756 3287 log.go:172] (0xc0007154a0) (5) Data frame handling\nI0525 00:33:14.037773 3287 log.go:172] (0xc0007154a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30029/\nI0525 00:33:14.043507 3287 log.go:172] (0xc000bf7600) Data frame received for 3\nI0525 00:33:14.043521 3287 log.go:172] (0xc00056afa0) (3) Data frame handling\nI0525 00:33:14.043532 3287 log.go:172] (0xc00056afa0) (3) Data frame sent\nI0525 00:33:14.044019 3287 log.go:172] (0xc000bf7600) Data frame received for 3\nI0525 00:33:14.044041 3287 log.go:172] (0xc00056afa0) (3) Data frame handling\nI0525 00:33:14.044223 3287 log.go:172] (0xc000bf7600) Data frame received for 5\nI0525 00:33:14.044233 3287 log.go:172] (0xc0007154a0) (5) Data frame handling\nI0525 00:33:14.046325 3287 log.go:172] (0xc000bf7600) Data frame received for 1\nI0525 00:33:14.046347 3287 log.go:172] (0xc00075ce60) (1) Data frame handling\nI0525 00:33:14.046357 3287 log.go:172] (0xc00075ce60) (1) Data frame sent\nI0525 00:33:14.046367 3287 log.go:172] (0xc000bf7600) (0xc00075ce60) Stream removed, broadcasting: 1\nI0525 00:33:14.046385 3287 log.go:172] (0xc000bf7600) Go away received\nI0525 00:33:14.046819 3287 log.go:172] (0xc000bf7600) (0xc00075ce60) Stream removed, broadcasting: 1\nI0525 00:33:14.046839 3287 log.go:172] (0xc000bf7600) (0xc00056afa0) Stream removed, broadcasting: 3\nI0525 00:33:14.046849 3287 log.go:172] (0xc000bf7600) (0xc0007154a0) Stream removed, broadcasting: 5\n" May 25 00:33:14.052: INFO: stdout: "affinity-nodeport-timeout-cl4l8" May 25 00:33:14.052: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-1972, will wait for the garbage collector to delete the pods May 25 00:33:14.186: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 8.394051ms May 25 00:33:14.686: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.283839ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:33:25.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1972" for this suite. 
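------------------------------
The affinity behaviour recorded above (sixteen consecutive hits on affinity-nodeport-timeout-z2tlc, then a different backend, affinity-nodeport-timeout-cl4l8, after a 15-second pause) is what ClientIP session affinity with a short timeout produces. A minimal sketch of such a Service; the selector, target port, and exact timeout are assumptions, since the suite's values are not shown in the log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-nodeport-timeout
spec:
  type: NodePort
  selector:
    app: affinity-nodeport-timeout  # hypothetical pod label
  ports:
  - port: 80
    targetPort: 8080                # assumed backend port
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10            # assumed; must be shorter than the 15s pause in the log
EOF

While the timeout has not expired, kube-proxy (here in iptables mode, per the proxyMode probe earlier) keeps routing a given client to the same backend pod; after the idle pause exceeds timeoutSeconds, the next request may land on any pod, which is exactly what the log shows.
------------------------------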
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:51.669 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":147,"skipped":2264,"failed":0} [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:33:25.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-6910 STEP: creating a selector STEP: Creating the service pods in kubernetes May 25 00:33:25.422: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 25 00:33:25.512: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 00:33:27.851: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 00:33:29.516: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 00:33:31.516: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 00:33:33.517: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 00:33:35.516: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 00:33:37.515: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 00:33:39.516: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 00:33:41.516: INFO: The status of Pod netserver-0 is Running (Ready = true) May 25 00:33:41.523: INFO: The status of Pod netserver-1 is Running (Ready = false) May 25 00:33:43.528: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 25 00:33:47.573: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.160:8080/dial?request=hostname&protocol=udp&host=10.244.1.159&port=8081&tries=1'] Namespace:pod-network-test-6910 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:33:47.573: INFO: >>> kubeConfig: /root/.kube/config I0525 00:33:47.602773 7 log.go:172] (0xc0026c1ef0) (0xc00125a6e0) Create stream I0525 00:33:47.602801 7 log.go:172] (0xc0026c1ef0) (0xc00125a6e0) Stream added, broadcasting: 1 I0525 00:33:47.604741 7 log.go:172] (0xc0026c1ef0) Reply frame received for 1 I0525 00:33:47.604780 7 log.go:172] (0xc0026c1ef0) (0xc000996000) Create stream I0525 00:33:47.604794 7 log.go:172] (0xc0026c1ef0) (0xc000996000) Stream added, broadcasting: 3 I0525 
00:33:47.605945 7 log.go:172] (0xc0026c1ef0) Reply frame received for 3 I0525 00:33:47.606005 7 log.go:172] (0xc0026c1ef0) (0xc00125a820) Create stream I0525 00:33:47.606025 7 log.go:172] (0xc0026c1ef0) (0xc00125a820) Stream added, broadcasting: 5 I0525 00:33:47.606969 7 log.go:172] (0xc0026c1ef0) Reply frame received for 5 I0525 00:33:47.674345 7 log.go:172] (0xc0026c1ef0) Data frame received for 3 I0525 00:33:47.674378 7 log.go:172] (0xc000996000) (3) Data frame handling I0525 00:33:47.674404 7 log.go:172] (0xc000996000) (3) Data frame sent I0525 00:33:47.674714 7 log.go:172] (0xc0026c1ef0) Data frame received for 3 I0525 00:33:47.674740 7 log.go:172] (0xc0026c1ef0) Data frame received for 5 I0525 00:33:47.674769 7 log.go:172] (0xc00125a820) (5) Data frame handling I0525 00:33:47.674793 7 log.go:172] (0xc000996000) (3) Data frame handling I0525 00:33:47.676385 7 log.go:172] (0xc0026c1ef0) Data frame received for 1 I0525 00:33:47.676400 7 log.go:172] (0xc00125a6e0) (1) Data frame handling I0525 00:33:47.676409 7 log.go:172] (0xc00125a6e0) (1) Data frame sent I0525 00:33:47.676419 7 log.go:172] (0xc0026c1ef0) (0xc00125a6e0) Stream removed, broadcasting: 1 I0525 00:33:47.676435 7 log.go:172] (0xc0026c1ef0) Go away received I0525 00:33:47.676578 7 log.go:172] (0xc0026c1ef0) (0xc00125a6e0) Stream removed, broadcasting: 1 I0525 00:33:47.676613 7 log.go:172] (0xc0026c1ef0) (0xc000996000) Stream removed, broadcasting: 3 I0525 00:33:47.676635 7 log.go:172] (0xc0026c1ef0) (0xc00125a820) Stream removed, broadcasting: 5 May 25 00:33:47.676: INFO: Waiting for responses: map[] May 25 00:33:47.680: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.160:8080/dial?request=hostname&protocol=udp&host=10.244.2.161&port=8081&tries=1'] Namespace:pod-network-test-6910 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:33:47.680: INFO: >>> kubeConfig: /root/.kube/config I0525 00:33:47.711844 7 log.go:172] (0xc001e9e4d0) (0xc00125b400) Create stream I0525 00:33:47.711871 7 log.go:172] (0xc001e9e4d0) (0xc00125b400) Stream added, broadcasting: 1 I0525 00:33:47.713770 7 log.go:172] (0xc001e9e4d0) Reply frame received for 1 I0525 00:33:47.713818 7 log.go:172] (0xc001e9e4d0) (0xc00125b4a0) Create stream I0525 00:33:47.713833 7 log.go:172] (0xc001e9e4d0) (0xc00125b4a0) Stream added, broadcasting: 3 I0525 00:33:47.714967 7 log.go:172] (0xc001e9e4d0) Reply frame received for 3 I0525 00:33:47.715005 7 log.go:172] (0xc001e9e4d0) (0xc000b07720) Create stream I0525 00:33:47.715018 7 log.go:172] (0xc001e9e4d0) (0xc000b07720) Stream added, broadcasting: 5 I0525 00:33:47.715881 7 log.go:172] (0xc001e9e4d0) Reply frame received for 5 I0525 00:33:47.785322 7 log.go:172] (0xc001e9e4d0) Data frame received for 3 I0525 00:33:47.785365 7 log.go:172] (0xc00125b4a0) (3) Data frame handling I0525 00:33:47.785397 7 log.go:172] (0xc00125b4a0) (3) Data frame sent I0525 00:33:47.786027 7 log.go:172] (0xc001e9e4d0) Data frame received for 5 I0525 00:33:47.786041 7 log.go:172] (0xc000b07720) (5) Data frame handling I0525 00:33:47.786073 7 log.go:172] (0xc001e9e4d0) Data frame received for 3 I0525 00:33:47.786103 7 log.go:172] (0xc00125b4a0) (3) Data frame handling I0525 00:33:47.788249 7 log.go:172] (0xc001e9e4d0) Data frame received for 1 I0525 00:33:47.788321 7 log.go:172] (0xc00125b400) (1) Data frame handling I0525 00:33:47.788355 7 log.go:172] (0xc00125b400) (1) Data frame sent I0525 00:33:47.788383 7 log.go:172] (0xc001e9e4d0) 
(0xc00125b400) Stream removed, broadcasting: 1 I0525 00:33:47.788408 7 log.go:172] (0xc001e9e4d0) Go away received I0525 00:33:47.788574 7 log.go:172] (0xc001e9e4d0) (0xc00125b400) Stream removed, broadcasting: 1 I0525 00:33:47.788599 7 log.go:172] (0xc001e9e4d0) (0xc00125b4a0) Stream removed, broadcasting: 3 I0525 00:33:47.788613 7 log.go:172] (0xc001e9e4d0) (0xc000b07720) Stream removed, broadcasting: 5 May 25 00:33:47.788: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:33:47.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6910" for this suite. • [SLOW TEST:22.415 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":148,"skipped":2264,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:33:47.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 25 00:33:47.937: INFO: Waiting up to 5m0s for pod "pod-5fa25894-76c9-447a-bf3f-0263366a1924" in namespace "emptydir-2966" to be "Succeeded or Failed" May 25 00:33:47.940: INFO: Pod "pod-5fa25894-76c9-447a-bf3f-0263366a1924": Phase="Pending", Reason="", readiness=false. Elapsed: 3.277648ms May 25 00:33:49.944: INFO: Pod "pod-5fa25894-76c9-447a-bf3f-0263366a1924": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007391678s May 25 00:33:51.949: INFO: Pod "pod-5fa25894-76c9-447a-bf3f-0263366a1924": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011511749s STEP: Saw pod success May 25 00:33:51.949: INFO: Pod "pod-5fa25894-76c9-447a-bf3f-0263366a1924" satisfied condition "Succeeded or Failed" May 25 00:33:51.952: INFO: Trying to get logs from node latest-worker pod pod-5fa25894-76c9-447a-bf3f-0263366a1924 container test-container: STEP: delete the pod May 25 00:33:52.006: INFO: Waiting for pod pod-5fa25894-76c9-447a-bf3f-0263366a1924 to disappear May 25 00:33:52.012: INFO: Pod pod-5fa25894-76c9-447a-bf3f-0263366a1924 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:33:52.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2966" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":149,"skipped":2280,"failed":0} S ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:33:52.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-fa525e15-a7d3-4060-a5e3-1ed3c255416f STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:33:58.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6436" for this suite. 
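------------------------------
The ConfigMap spec above checks that binaryData keys survive the round trip into a mounted volume. A minimal sketch with hypothetical names and payload:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo       # hypothetical name
data:
  text: "hello"                     # plain UTF-8 key
binaryData:
  data.bin: "3q2+7w=="              # base64 of the raw bytes 0xde 0xad 0xbe 0xef
EOF

When mounted as a volume, binaryData keys are written out as the decoded raw bytes; volumes are the natural way to consume them, since environment variable values must be UTF-8 strings, which is why the test reads both the text key and the binary key back from the volume.
------------------------------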
• [SLOW TEST:6.111 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":150,"skipped":2281,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:33:58.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 25 00:34:08.360: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 00:34:08.383: INFO: Pod pod-with-prestop-exec-hook still exists May 25 00:34:10.383: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 00:34:10.388: INFO: Pod pod-with-prestop-exec-hook still exists May 25 00:34:12.383: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 00:34:12.388: INFO: Pod pod-with-prestop-exec-hook still exists May 25 00:34:14.383: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 00:34:14.388: INFO: Pod pod-with-prestop-exec-hook still exists May 25 00:34:16.383: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 00:34:16.386: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:34:16.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-309" for this suite. 
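------------------------------
The deletion sequence above, with the pod lingering through several "still exists" polls, is the expected shape of a graceful delete while a preStop hook runs. A minimal sketch of a pod with such a hook; the image and hook body are assumptions (the suite's hook actually calls back to the HTTPGet handler pod it created in BeforeEach):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox:1.29             # assumed image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop > /tmp/prestop && sleep 5"]
EOF

The kubelet runs the preStop command before sending SIGTERM to the container, so deletion is delayed until the hook finishes or the grace period expires; the "check prestop hook" step then confirms the hook actually fired.
------------------------------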
• [SLOW TEST:18.225 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":151,"skipped":2290,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:34:16.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4249 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4249 STEP: Creating statefulset with conflicting port in namespace statefulset-4249 STEP: Waiting until pod test-pod will start running in namespace statefulset-4249 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4249 May 25 00:34:22.684: INFO: Observed stateful pod in namespace: statefulset-4249, name: ss-0, uid: 42ddac0b-aa5f-497b-b3d5-3f6f4f8799d3, status phase: Pending. Waiting for statefulset controller to delete. May 25 00:34:23.151: INFO: Observed stateful pod in namespace: statefulset-4249, name: ss-0, uid: 42ddac0b-aa5f-497b-b3d5-3f6f4f8799d3, status phase: Failed. Waiting for statefulset controller to delete. May 25 00:34:23.165: INFO: Observed stateful pod in namespace: statefulset-4249, name: ss-0, uid: 42ddac0b-aa5f-497b-b3d5-3f6f4f8799d3, status phase: Failed. Waiting for statefulset controller to delete. 
May 25 00:34:23.188: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4249 STEP: Removing pod with conflicting port in namespace statefulset-4249 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4249 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 25 00:34:27.324: INFO: Deleting all statefulset in ns statefulset-4249 May 25 00:34:27.327: INFO: Scaling statefulset ss to 0 May 25 00:34:37.373: INFO: Waiting for statefulset status.replicas updated to 0 May 25 00:34:37.376: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:34:37.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4249" for this suite. • [SLOW TEST:21.018 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":152,"skipped":2308,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:34:37.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:34:37.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7958" for this suite. 
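------------------------------
The secret steps above map onto a short kubectl session. A sketch with hypothetical names and values; the suite patches through the API directly, but the observable effect is the same:

# create, then patch both a label (used by the later LabelSelector steps) and the data
kubectl create secret generic demo-secret --from-literal=key=value
kubectl patch secret demo-secret -p '{"metadata":{"labels":{"patched":"true"}},"data":{"key":"dmFsdWUy"}}'  # "dmFsdWUy" is base64("value2")

# list across all namespaces by the patched label, then clean up by the same selector
kubectl get secrets --all-namespaces -l patched=true
kubectl delete secret -l patched=true
------------------------------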
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":153,"skipped":2312,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:34:37.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 00:34:37.681: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ac6ed7f-7c98-4d2b-aaeb-b5b721ed0fd3" in namespace "projected-9388" to be "Succeeded or Failed" May 25 00:34:37.688: INFO: Pod "downwardapi-volume-8ac6ed7f-7c98-4d2b-aaeb-b5b721ed0fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.561638ms May 25 00:34:39.693: INFO: Pod "downwardapi-volume-8ac6ed7f-7c98-4d2b-aaeb-b5b721ed0fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012203671s May 25 00:34:41.697: INFO: Pod "downwardapi-volume-8ac6ed7f-7c98-4d2b-aaeb-b5b721ed0fd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016442522s STEP: Saw pod success May 25 00:34:41.697: INFO: Pod "downwardapi-volume-8ac6ed7f-7c98-4d2b-aaeb-b5b721ed0fd3" satisfied condition "Succeeded or Failed" May 25 00:34:41.701: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-8ac6ed7f-7c98-4d2b-aaeb-b5b721ed0fd3 container client-container: STEP: delete the pod May 25 00:34:41.775: INFO: Waiting for pod downwardapi-volume-8ac6ed7f-7c98-4d2b-aaeb-b5b721ed0fd3 to disappear May 25 00:34:41.782: INFO: Pod downwardapi-volume-8ac6ed7f-7c98-4d2b-aaeb-b5b721ed0fd3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:34:41.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9388" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":154,"skipped":2328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:34:41.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 25 00:34:42.352: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 25 00:34:44.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963682, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963682, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963682, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963682, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 00:34:47.400: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:34:47.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:34:48.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8274" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.836 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":155,"skipped":2434,"failed":0} S ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:34:48.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-4315bd55-1ea0-40d5-8e8e-5c7781931ab9 STEP: Creating configMap with name cm-test-opt-upd-15a3df32-5661-4639-a6d6-3ff9c549db07 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-4315bd55-1ea0-40d5-8e8e-5c7781931ab9 STEP: Updating configmap cm-test-opt-upd-15a3df32-5661-4639-a6d6-3ff9c549db07 STEP: Creating configMap with name cm-test-opt-create-22e8cfef-533b-4cb3-b0dc-ec31c3a787cc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:34:59.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2610" for this suite. 
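Both configMap references in the test above are marked optional, which is why the pod starts even though cm-test-opt-create-22e8cfef-533b-4cb3-b0dc-ec31c3a787cc does not exist yet, and why the kubelet later reflects the delete, update, and create in the projected files. A minimal sketch of the volume layout, with the pod name, image, and mount path assumed (the actual test wiring may differ):

apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo    # hypothetical name
  namespace: projected-2610
spec:
  containers:
  - name: app
    image: busybox                  # assumption
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del-4315bd55-1ea0-40d5-8e8e-5c7781931ab9
          optional: true            # pod keeps running after this configMap is deleted
      - configMap:
          name: cm-test-opt-create-22e8cfef-533b-4cb3-b0dc-ec31c3a787cc
          optional: true            # pod starts before this configMap exists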
• [SLOW TEST:10.580 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":156,"skipped":2435,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:34:59.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 25 00:34:59.293: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8862' May 25 00:34:59.626: INFO: stderr: "" May 25 00:34:59.626: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 25 00:34:59.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8862' May 25 00:34:59.743: INFO: stderr: "" May 25 00:34:59.743: INFO: stdout: "update-demo-nautilus-8tbdr update-demo-nautilus-l8jvw " May 25 00:34:59.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tbdr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8862' May 25 00:34:59.881: INFO: stderr: "" May 25 00:34:59.881: INFO: stdout: "" May 25 00:34:59.881: INFO: update-demo-nautilus-8tbdr is created but not running May 25 00:35:04.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8862' May 25 00:35:05.005: INFO: stderr: "" May 25 00:35:05.005: INFO: stdout: "update-demo-nautilus-8tbdr update-demo-nautilus-l8jvw " May 25 00:35:05.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tbdr -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8862' May 25 00:35:05.235: INFO: stderr: "" May 25 00:35:05.235: INFO: stdout: "true" May 25 00:35:05.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tbdr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8862' May 25 00:35:05.392: INFO: stderr: "" May 25 00:35:05.392: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 00:35:05.392: INFO: validating pod update-demo-nautilus-8tbdr May 25 00:35:05.498: INFO: got data: { "image": "nautilus.jpg" } May 25 00:35:05.498: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 00:35:05.498: INFO: update-demo-nautilus-8tbdr is verified up and running May 25 00:35:05.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8jvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8862' May 25 00:35:05.751: INFO: stderr: "" May 25 00:35:05.751: INFO: stdout: "true" May 25 00:35:05.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8jvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8862' May 25 00:35:05.882: INFO: stderr: "" May 25 00:35:05.883: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 00:35:05.883: INFO: validating pod update-demo-nautilus-l8jvw May 25 00:35:05.888: INFO: got data: { "image": "nautilus.jpg" } May 25 00:35:05.888: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 00:35:05.888: INFO: update-demo-nautilus-l8jvw is verified up and running STEP: using delete to clean up resources May 25 00:35:05.888: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8862' May 25 00:35:06.012: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 25 00:35:06.012: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 25 00:35:06.013: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8862' May 25 00:35:06.394: INFO: stderr: "No resources found in kubectl-8862 namespace.\n" May 25 00:35:06.394: INFO: stdout: "" May 25 00:35:06.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8862 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 00:35:06.550: INFO: stderr: "" May 25 00:35:06.550: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:35:06.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8862" for this suite. • [SLOW TEST:7.347 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":157,"skipped":2447,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:35:06.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-9dcbbe35-4441-4e12-a7ee-59572a66f062 STEP: Creating a pod to test consume configMaps May 25 00:35:07.267: INFO: Waiting up to 5m0s for pod "pod-configmaps-14d65f63-e567-4984-a958-1f0be8f1235e" in namespace "configmap-4217" to be "Succeeded or Failed" May 25 00:35:07.284: INFO: Pod "pod-configmaps-14d65f63-e567-4984-a958-1f0be8f1235e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.481784ms May 25 00:35:09.287: INFO: Pod "pod-configmaps-14d65f63-e567-4984-a958-1f0be8f1235e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019601991s May 25 00:35:11.292: INFO: Pod "pod-configmaps-14d65f63-e567-4984-a958-1f0be8f1235e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024429171s May 25 00:35:13.297: INFO: Pod "pod-configmaps-14d65f63-e567-4984-a958-1f0be8f1235e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.029358572s STEP: Saw pod success May 25 00:35:13.297: INFO: Pod "pod-configmaps-14d65f63-e567-4984-a958-1f0be8f1235e" satisfied condition "Succeeded or Failed" May 25 00:35:13.300: INFO: Trying to get logs from node latest-worker pod pod-configmaps-14d65f63-e567-4984-a958-1f0be8f1235e container configmap-volume-test: STEP: delete the pod May 25 00:35:13.349: INFO: Waiting for pod pod-configmaps-14d65f63-e567-4984-a958-1f0be8f1235e to disappear May 25 00:35:13.364: INFO: Pod pod-configmaps-14d65f63-e567-4984-a958-1f0be8f1235e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:35:13.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4217" for this suite. • [SLOW TEST:6.833 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":158,"skipped":2450,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:35:13.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-3687/secret-test-369cace7-f5ab-4eff-8084-53f04c380bea STEP: Creating a pod to test consume secrets May 25 00:35:13.498: INFO: Waiting up to 5m0s for pod "pod-configmaps-3dcc4b8f-b16d-44bb-a5d5-bc2d2d15b8ef" in namespace "secrets-3687" to be "Succeeded or Failed" May 25 00:35:13.514: INFO: Pod "pod-configmaps-3dcc4b8f-b16d-44bb-a5d5-bc2d2d15b8ef": Phase="Pending", Reason="", readiness=false. Elapsed: 16.274355ms May 25 00:35:15.607: INFO: Pod "pod-configmaps-3dcc4b8f-b16d-44bb-a5d5-bc2d2d15b8ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108405448s May 25 00:35:17.611: INFO: Pod "pod-configmaps-3dcc4b8f-b16d-44bb-a5d5-bc2d2d15b8ef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.112689628s STEP: Saw pod success May 25 00:35:17.611: INFO: Pod "pod-configmaps-3dcc4b8f-b16d-44bb-a5d5-bc2d2d15b8ef" satisfied condition "Succeeded or Failed" May 25 00:35:17.614: INFO: Trying to get logs from node latest-worker pod pod-configmaps-3dcc4b8f-b16d-44bb-a5d5-bc2d2d15b8ef container env-test: STEP: delete the pod May 25 00:35:17.648: INFO: Waiting for pod pod-configmaps-3dcc4b8f-b16d-44bb-a5d5-bc2d2d15b8ef to disappear May 25 00:35:17.689: INFO: Pod pod-configmaps-3dcc4b8f-b16d-44bb-a5d5-bc2d2d15b8ef no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:35:17.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3687" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":159,"skipped":2456,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:35:17.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 00:35:18.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a293b0bc-4ac0-4685-a03a-1747b2dc1bb6" in namespace "projected-2146" to be "Succeeded or Failed" May 25 00:35:18.050: INFO: Pod "downwardapi-volume-a293b0bc-4ac0-4685-a03a-1747b2dc1bb6": Phase="Pending", Reason="", readiness=false. Elapsed: 26.343867ms May 25 00:35:20.054: INFO: Pod "downwardapi-volume-a293b0bc-4ac0-4685-a03a-1747b2dc1bb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030445599s May 25 00:35:22.074: INFO: Pod "downwardapi-volume-a293b0bc-4ac0-4685-a03a-1747b2dc1bb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050479012s STEP: Saw pod success May 25 00:35:22.074: INFO: Pod "downwardapi-volume-a293b0bc-4ac0-4685-a03a-1747b2dc1bb6" satisfied condition "Succeeded or Failed" May 25 00:35:22.076: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a293b0bc-4ac0-4685-a03a-1747b2dc1bb6 container client-container: STEP: delete the pod May 25 00:35:22.111: INFO: Waiting for pod downwardapi-volume-a293b0bc-4ac0-4685-a03a-1747b2dc1bb6 to disappear May 25 00:35:22.127: INFO: Pod downwardapi-volume-a293b0bc-4ac0-4685-a03a-1747b2dc1bb6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:35:22.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2146" for this suite. 
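Earlier in this block, the [sig-api-machinery] Secrets test consumes secrets-3687/secret-test-369cace7-f5ab-4eff-8084-53f04c380bea through container environment variables rather than a volume, via env.valueFrom.secretKeyRef. A minimal sketch of that pattern, with the key name, variable name, and image assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-demo         # hypothetical name
  namespace: secrets-3687
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                  # assumption
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA             # assumed variable name
      valueFrom:
        secretKeyRef:
          name: secret-test-369cace7-f5ab-4eff-8084-53f04c380bea
          key: data-1               # assumption; the log does not show the key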
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":160,"skipped":2466,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:35:22.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 00:35:23.110: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 00:35:25.119: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963723, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963723, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963723, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963723, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 00:35:28.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:35:40.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9481" for this suite. 
STEP: Destroying namespace "webhook-9481-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.493 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":161,"skipped":2483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:35:40.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 25 00:35:40.704: INFO: Waiting up to 5m0s for pod "pod-60073373-98f8-4ae8-b280-1bb7031ffbb3" in namespace "emptydir-2662" to be "Succeeded or Failed" May 25 00:35:40.731: INFO: Pod "pod-60073373-98f8-4ae8-b280-1bb7031ffbb3": Phase="Pending", Reason="", readiness=false. Elapsed: 27.835528ms May 25 00:35:42.736: INFO: Pod "pod-60073373-98f8-4ae8-b280-1bb7031ffbb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03231443s May 25 00:35:44.744: INFO: Pod "pod-60073373-98f8-4ae8-b280-1bb7031ffbb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040183625s STEP: Saw pod success May 25 00:35:44.744: INFO: Pod "pod-60073373-98f8-4ae8-b280-1bb7031ffbb3" satisfied condition "Succeeded or Failed" May 25 00:35:44.747: INFO: Trying to get logs from node latest-worker2 pod pod-60073373-98f8-4ae8-b280-1bb7031ffbb3 container test-container: STEP: delete the pod May 25 00:35:44.784: INFO: Waiting for pod pod-60073373-98f8-4ae8-b280-1bb7031ffbb3 to disappear May 25 00:35:44.793: INFO: Pod pod-60073373-98f8-4ae8-b280-1bb7031ffbb3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:35:44.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2662" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":162,"skipped":2510,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:35:44.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 25 00:35:44.954: INFO: Pod name pod-release: Found 0 pods out of 1 May 25 00:35:49.959: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:35:50.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2039" for this suite. • [SLOW TEST:5.337 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":163,"skipped":2542,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:35:50.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:35:50.196: INFO: Creating deployment "test-recreate-deployment" May 25 00:35:50.223: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 25 00:35:50.275: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 25 00:35:52.281: INFO: Waiting deployment "test-recreate-deployment" to complete May 25 00:35:52.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
May 25 00:35:52.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963750, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963750, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963750, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963750, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 00:35:54.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963750, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963750, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963750, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963750, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 00:35:56.288: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
May 25 00:35:56.296: INFO: Updating deployment test-recreate-deployment
May 25 00:35:56.296: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
May 25 00:35:57.445: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3144 /apis/apps/v1/namespaces/deployment-3144/deployments/test-recreate-deployment 66043e94-e1f7-4dc7-b78a-315d8c3203bb 7424651 2 2020-05-25 00:35:50 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-25 00:35:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-25 00:35:57 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036f26a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-25 00:35:57 +0000 UTC,LastTransitionTime:2020-05-25 00:35:57 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-25 00:35:57 +0000 UTC,LastTransitionTime:2020-05-25 00:35:50 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 25 00:35:57.448: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-3144 /apis/apps/v1/namespaces/deployment-3144/replicasets/test-recreate-deployment-d5667d9c7 47bae853-96d0-4a46-91a8-ba48a51242f9 7424648 1 2020-05-25 00:35:56 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 66043e94-e1f7-4dc7-b78a-315d8c3203bb 0xc0042db1b0 0xc0042db1b1}] [] [{kube-controller-manager Update apps/v1 2020-05-25 00:35:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"66043e94-e1f7-4dc7-b78a-315d8c3203bb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0042db228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 00:35:57.448: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 25 00:35:57.448: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-3144 /apis/apps/v1/namespaces/deployment-3144/replicasets/test-recreate-deployment-6d65b9f6d8 f1e77b39-45d3-4f38-9456-540b8b2b6990 7424638 2 2020-05-25 00:35:50 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 66043e94-e1f7-4dc7-b78a-315d8c3203bb 0xc0042db0b7 0xc0042db0b8}] [] [{kube-controller-manager Update apps/v1 2020-05-25 00:35:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"66043e94-e1f7-4dc7-b78a-315d8c3203bb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0042db148 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 00:35:57.457: INFO: Pod "test-recreate-deployment-d5667d9c7-w9nh6" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-w9nh6 test-recreate-deployment-d5667d9c7- deployment-3144 /api/v1/namespaces/deployment-3144/pods/test-recreate-deployment-d5667d9c7-w9nh6 61026af5-2337-429e-9cc8-8bca58365e2f 7424652 0 2020-05-25 00:35:56 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 47bae853-96d0-4a46-91a8-ba48a51242f9 0xc0036f2ab0 0xc0036f2ab1}] [] [{kube-controller-manager Update v1 2020-05-25 00:35:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47bae853-96d0-4a46-91a8-ba48a51242f9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 00:35:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hqn8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hqn8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hqn8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:35:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:35:57 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:35:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:35:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-25 00:35:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:35:57.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3144" for this suite. • [SLOW TEST:7.324 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":164,"skipped":2558,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:35:57.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3614.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3614.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3614.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3614.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 00:36:03.939: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local from pod dns-3614/dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f: the server could not find the requested resource (get pods dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f) May 25 00:36:03.942: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local from pod dns-3614/dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f: the server could not find the requested resource (get pods dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f) May 25 00:36:03.946: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3614.svc.cluster.local from pod dns-3614/dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f: the server could not find the requested resource (get pods dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f) May 25 00:36:03.949: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3614.svc.cluster.local from pod dns-3614/dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f: the server could not find the requested resource (get pods dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f) May 25 00:36:03.958: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local from pod dns-3614/dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f: the server could not find the requested resource (get pods dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f) May 25 00:36:03.960: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local from pod dns-3614/dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f: the server could not find the requested resource (get pods dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f) May 25 00:36:03.963: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3614.svc.cluster.local from pod 
dns-3614/dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f: the server could not find the requested resource (get pods dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f) May 25 00:36:03.966: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3614.svc.cluster.local from pod dns-3614/dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f: the server could not find the requested resource (get pods dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f) May 25 00:36:03.971: INFO: Lookups using dns-3614/dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3614.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3614.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3614.svc.cluster.local jessie_udp@dns-test-service-2.dns-3614.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3614.svc.cluster.local]
[the polling rounds at 00:36:08, 00:36:13, 00:36:18, 00:36:23, and 00:36:28 reported the same eight lookup failures, with the same "the server could not find the requested resource" error each time]
May 25 00:36:34.014: INFO: DNS probes using dns-3614/dns-test-9752858f-1230-4503-9d07-afcd7e7fa17f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:36:34.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3614" for this suite. • [SLOW TEST:37.372 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":165,"skipped":2580,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:36:34.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-9201cc91-0508-4fa3-be8d-846b91529ffe in namespace container-probe-2719 May 25 00:36:38.991: INFO: Started pod liveness-9201cc91-0508-4fa3-be8d-846b91529ffe in namespace container-probe-2719 STEP: checking the pod's current state and verifying that restartCount is present May 25 00:36:38.994: INFO: Initial restart count of pod liveness-9201cc91-0508-4fa3-be8d-846b91529ffe is 0 May 25 00:36:57.120: INFO: Restart count of pod container-probe-2719/liveness-9201cc91-0508-4fa3-be8d-846b91529ffe is now 1 (18.125889005s elapsed) May 25 00:37:17.168: 
INFO: Restart count of pod container-probe-2719/liveness-9201cc91-0508-4fa3-be8d-846b91529ffe is now 2 (38.173988061s elapsed) May 25 00:37:37.220: INFO: Restart count of pod container-probe-2719/liveness-9201cc91-0508-4fa3-be8d-846b91529ffe is now 3 (58.225286976s elapsed) May 25 00:37:57.261: INFO: Restart count of pod container-probe-2719/liveness-9201cc91-0508-4fa3-be8d-846b91529ffe is now 4 (1m18.267098226s elapsed) May 25 00:38:59.446: INFO: Restart count of pod container-probe-2719/liveness-9201cc91-0508-4fa3-be8d-846b91529ffe is now 5 (2m20.451462909s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:38:59.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2719" for this suite. • [SLOW TEST:144.678 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":166,"skipped":2591,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:38:59.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8683 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8683;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8683 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8683;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8683.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8683.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8683.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8683.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8683.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8683.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8683.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.dns-test-service.dns-8683.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8683.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8683.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8683.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8683.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8683.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 247.153.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.153.247_udp@PTR;check="$$(dig +tcp +noall +answer +search 247.153.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.153.247_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8683 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8683;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8683 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8683;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8683.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8683.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8683.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8683.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8683.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8683.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8683.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8683.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8683.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8683.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8683.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8683.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8683.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 247.153.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.153.247_udp@PTR;check="$$(dig +tcp +noall +answer +search 247.153.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.153.247_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 00:39:06.098: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.100: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.104: INFO: Unable to read wheezy_udp@dns-test-service.dns-8683 from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.106: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8683 from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.109: INFO: Unable to read wheezy_udp@dns-test-service.dns-8683.svc from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.112: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8683.svc from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.114: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8683.svc from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.117: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8683.svc from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.134: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.136: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.140: INFO: Unable to read jessie_udp@dns-test-service.dns-8683 from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.142: INFO: Unable to read jessie_tcp@dns-test-service.dns-8683 from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.145: INFO: Unable to read jessie_udp@dns-test-service.dns-8683.svc from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.148: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-8683.svc from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.152: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8683.svc from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.154: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8683.svc from pod dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25: the server could not find the requested resource (get pods dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25) May 25 00:39:06.172: INFO: Lookups using dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8683 wheezy_tcp@dns-test-service.dns-8683 wheezy_udp@dns-test-service.dns-8683.svc wheezy_tcp@dns-test-service.dns-8683.svc wheezy_udp@_http._tcp.dns-test-service.dns-8683.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8683.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8683 jessie_tcp@dns-test-service.dns-8683 jessie_udp@dns-test-service.dns-8683.svc jessie_tcp@dns-test-service.dns-8683.svc jessie_udp@_http._tcp.dns-test-service.dns-8683.svc jessie_tcp@_http._tcp.dns-test-service.dns-8683.svc]
[the polling rounds at 00:39:11, 00:39:16, 00:39:21, 00:39:26, and 00:39:31 reported the same sixteen lookup failures, with the same "the server could not find the requested resource" error each time]
May 25 00:39:36.266: INFO: DNS probes using dns-8683/dns-test-27371230-df2e-494e-bbc5-d631a3ab5d25 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:39:37.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8683" for this suite. • [SLOW TEST:37.674 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":167,"skipped":2607,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:39:37.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:39:37.298: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-94cbd938-6642-4f11-8ac2-a7f1df4f5bc1" in namespace "security-context-test-8090" to be "Succeeded or Failed" May 25 00:39:37.308: INFO: Pod "alpine-nnp-false-94cbd938-6642-4f11-8ac2-a7f1df4f5bc1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.967796ms May 25 00:39:39.393: INFO: Pod "alpine-nnp-false-94cbd938-6642-4f11-8ac2-a7f1df4f5bc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095048641s May 25 00:39:41.397: INFO: Pod "alpine-nnp-false-94cbd938-6642-4f11-8ac2-a7f1df4f5bc1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.099202553s May 25 00:39:41.397: INFO: Pod "alpine-nnp-false-94cbd938-6642-4f11-8ac2-a7f1df4f5bc1" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:39:41.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8090" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":168,"skipped":2630,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:39:41.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 25 00:39:41.616: INFO: Waiting up to 5m0s for pod "downward-api-b96ec36a-2d21-4a91-bc31-77017188fd34" in namespace "downward-api-1331" to be "Succeeded or Failed" May 25 00:39:41.619: INFO: Pod "downward-api-b96ec36a-2d21-4a91-bc31-77017188fd34": Phase="Pending", Reason="", readiness=false. Elapsed: 3.174844ms May 25 00:39:43.623: INFO: Pod "downward-api-b96ec36a-2d21-4a91-bc31-77017188fd34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007109286s May 25 00:39:45.627: INFO: Pod "downward-api-b96ec36a-2d21-4a91-bc31-77017188fd34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010579766s STEP: Saw pod success May 25 00:39:45.627: INFO: Pod "downward-api-b96ec36a-2d21-4a91-bc31-77017188fd34" satisfied condition "Succeeded or Failed" May 25 00:39:45.630: INFO: Trying to get logs from node latest-worker2 pod downward-api-b96ec36a-2d21-4a91-bc31-77017188fd34 container dapi-container: STEP: delete the pod May 25 00:39:45.676: INFO: Waiting for pod downward-api-b96ec36a-2d21-4a91-bc31-77017188fd34 to disappear May 25 00:39:45.692: INFO: Pod downward-api-b96ec36a-2d21-4a91-bc31-77017188fd34 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:39:45.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1331" for this suite. 
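------------------------------
Note: the Downward API test above gets the host IP into the container as an ordinary environment variable through a fieldRef to status.hostIP, resolved by the kubelet at container start; nothing inside the pod queries the API server. A minimal standalone sketch of this mechanism, assuming a reachable cluster and a busybox image (the pod and variable names here are illustrative, not the ones the suite generates):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "printenv HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # downward API: inject a field of the pod's status
EOF
kubectl logs downward-api-demo       # prints the node's IP once the pod has completed
------------------------------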
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":169,"skipped":2675,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:39:45.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 25 00:39:45.836: INFO: Waiting up to 5m0s for pod "var-expansion-c0e6fb91-1fcc-455a-87ae-49cf039b3f77" in namespace "var-expansion-616" to be "Succeeded or Failed" May 25 00:39:45.840: INFO: Pod "var-expansion-c0e6fb91-1fcc-455a-87ae-49cf039b3f77": Phase="Pending", Reason="", readiness=false. Elapsed: 3.578759ms May 25 00:39:47.844: INFO: Pod "var-expansion-c0e6fb91-1fcc-455a-87ae-49cf039b3f77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007231116s May 25 00:39:49.883: INFO: Pod "var-expansion-c0e6fb91-1fcc-455a-87ae-49cf039b3f77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04714177s STEP: Saw pod success May 25 00:39:49.883: INFO: Pod "var-expansion-c0e6fb91-1fcc-455a-87ae-49cf039b3f77" satisfied condition "Succeeded or Failed" May 25 00:39:49.886: INFO: Trying to get logs from node latest-worker pod var-expansion-c0e6fb91-1fcc-455a-87ae-49cf039b3f77 container dapi-container: STEP: delete the pod May 25 00:39:49.908: INFO: Waiting for pod var-expansion-c0e6fb91-1fcc-455a-87ae-49cf039b3f77 to disappear May 25 00:39:49.912: INFO: Pod var-expansion-c0e6fb91-1fcc-455a-87ae-49cf039b3f77 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:39:49.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-616" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":170,"skipped":2678,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:39:49.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-87b9d563-55b7-4bb1-83ab-3cc2d7f0b54d STEP: Creating a pod to test consume configMaps May 25 00:39:50.047: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8f8a6bf3-be93-41b5-a55b-eaa46fa22945" in namespace "projected-5644" to be "Succeeded or Failed" May 25 00:39:50.051: INFO: Pod "pod-projected-configmaps-8f8a6bf3-be93-41b5-a55b-eaa46fa22945": Phase="Pending", Reason="", readiness=false. Elapsed: 3.948126ms May 25 00:39:52.058: INFO: Pod "pod-projected-configmaps-8f8a6bf3-be93-41b5-a55b-eaa46fa22945": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010790753s May 25 00:39:54.123: INFO: Pod "pod-projected-configmaps-8f8a6bf3-be93-41b5-a55b-eaa46fa22945": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07643866s STEP: Saw pod success May 25 00:39:54.123: INFO: Pod "pod-projected-configmaps-8f8a6bf3-be93-41b5-a55b-eaa46fa22945" satisfied condition "Succeeded or Failed" May 25 00:39:54.126: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-8f8a6bf3-be93-41b5-a55b-eaa46fa22945 container projected-configmap-volume-test: STEP: delete the pod May 25 00:39:54.210: INFO: Waiting for pod pod-projected-configmaps-8f8a6bf3-be93-41b5-a55b-eaa46fa22945 to disappear May 25 00:39:54.254: INFO: Pod pod-projected-configmaps-8f8a6bf3-be93-41b5-a55b-eaa46fa22945 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:39:54.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5644" for this suite. 
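The projected-configMap spec maps a ConfigMap key to a custom path inside a projected volume and reads it back as a non-root user. A sketch under those assumptions; the object names, UID, and key are illustrative:

kubectl create configmap projected-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # the "as non-root" part
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/projected
  volumes:
  - name: cm-vol
    projected:
      sources:
      - configMap:
          name: projected-demo
          items:                     # the "with mappings" part: key -> custom path
          - key: data-1
            path: path/to/data-1
EOF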
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":171,"skipped":2686,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:39:54.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:39:54.407: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-c4ded9f7-5b82-487a-97ad-64f34de9af0a" in namespace "security-context-test-524" to be "Succeeded or Failed" May 25 00:39:54.410: INFO: Pod "busybox-readonly-false-c4ded9f7-5b82-487a-97ad-64f34de9af0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.896331ms May 25 00:39:56.415: INFO: Pod "busybox-readonly-false-c4ded9f7-5b82-487a-97ad-64f34de9af0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007446587s May 25 00:39:58.419: INFO: Pod "busybox-readonly-false-c4ded9f7-5b82-487a-97ad-64f34de9af0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011754744s May 25 00:39:58.419: INFO: Pod "busybox-readonly-false-c4ded9f7-5b82-487a-97ad-64f34de9af0a" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:39:58.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-524" for this suite. 
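readOnlyRootFilesystem is a per-container securityContext field; when false (the default) the container can write anywhere in its root filesystem, which is what this spec verifies. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: rootfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly-false
    image: busybox:1.29
    command: ["sh", "-c", "touch /writable-check && echo rootfs is writable"]
    securityContext:
      readOnlyRootFilesystem: false  # flip to true and the touch fails with a read-only error
EOF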
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":172,"skipped":2698,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:39:58.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 00:39:59.083: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 00:40:01.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963999, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963999, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963999, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725963999, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 00:40:04.135: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:40:14.300: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9091" for this suite. STEP: Destroying namespace "webhook-9091-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.998 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":173,"skipped":2706,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:40:14.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-461f6d29-a536-4725-91a1-c3b40819a542 STEP: Creating a pod to test consume secrets May 25 00:40:14.552: INFO: Waiting up to 5m0s for pod "pod-secrets-24c3c0aa-4c15-4179-ba04-97be9e552518" in namespace "secrets-7717" to be "Succeeded or Failed" May 25 00:40:14.608: INFO: Pod "pod-secrets-24c3c0aa-4c15-4179-ba04-97be9e552518": Phase="Pending", Reason="", readiness=false. Elapsed: 56.176284ms May 25 00:40:16.612: INFO: Pod "pod-secrets-24c3c0aa-4c15-4179-ba04-97be9e552518": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060239981s May 25 00:40:18.616: INFO: Pod "pod-secrets-24c3c0aa-4c15-4179-ba04-97be9e552518": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064077023s STEP: Saw pod success May 25 00:40:18.616: INFO: Pod "pod-secrets-24c3c0aa-4c15-4179-ba04-97be9e552518" satisfied condition "Succeeded or Failed" May 25 00:40:18.619: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-24c3c0aa-4c15-4179-ba04-97be9e552518 container secret-volume-test: STEP: delete the pod May 25 00:40:18.640: INFO: Waiting for pod pod-secrets-24c3c0aa-4c15-4179-ba04-97be9e552518 to disappear May 25 00:40:18.656: INFO: Pod pod-secrets-24c3c0aa-4c15-4179-ba04-97be9e552518 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:40:18.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7717" for this suite. 
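The Secrets spec mounts one Secret through two separate volumes of the same pod and reads it from both paths. A sketch with illustrative names:

kubectl create secret generic multi-vol-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:                           # the same Secret, mounted twice
  - name: secret-volume-1
    secret:
      secretName: multi-vol-demo
  - name: secret-volume-2
    secret:
      secretName: multi-vol-demo
EOF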
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":174,"skipped":2723,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:40:18.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 25 00:40:18.779: INFO: Waiting up to 5m0s for pod "client-containers-6d16559d-8ba7-4593-9327-168e454e22c3" in namespace "containers-3370" to be "Succeeded or Failed" May 25 00:40:18.782: INFO: Pod "client-containers-6d16559d-8ba7-4593-9327-168e454e22c3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.293623ms May 25 00:40:20.786: INFO: Pod "client-containers-6d16559d-8ba7-4593-9327-168e454e22c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007339168s May 25 00:40:22.791: INFO: Pod "client-containers-6d16559d-8ba7-4593-9327-168e454e22c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011556776s STEP: Saw pod success May 25 00:40:22.791: INFO: Pod "client-containers-6d16559d-8ba7-4593-9327-168e454e22c3" satisfied condition "Succeeded or Failed" May 25 00:40:22.794: INFO: Trying to get logs from node latest-worker pod client-containers-6d16559d-8ba7-4593-9327-168e454e22c3 container test-container: STEP: delete the pod May 25 00:40:22.814: INFO: Waiting for pod client-containers-6d16559d-8ba7-4593-9327-168e454e22c3 to disappear May 25 00:40:22.833: INFO: Pod client-containers-6d16559d-8ba7-4593-9327-168e454e22c3 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:40:22.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3370" for this suite. 
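The Docker Containers spec checks that spec.containers[].command replaces the image's ENTRYPOINT and spec.containers[].args replaces its CMD. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["echo"]                    # overrides the image ENTRYPOINT
    args: ["overridden", "arguments"]    # overrides the image CMD
EOF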
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":175,"skipped":2740,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:40:22.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 25 00:40:22.927: INFO: Waiting up to 5m0s for pod "downward-api-d69abb73-ebb1-4598-86fc-dc6507b396a2" in namespace "downward-api-5157" to be "Succeeded or Failed" May 25 00:40:22.929: INFO: Pod "downward-api-d69abb73-ebb1-4598-86fc-dc6507b396a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.44228ms May 25 00:40:25.004: INFO: Pod "downward-api-d69abb73-ebb1-4598-86fc-dc6507b396a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077070788s May 25 00:40:27.008: INFO: Pod "downward-api-d69abb73-ebb1-4598-86fc-dc6507b396a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081785627s STEP: Saw pod success May 25 00:40:27.008: INFO: Pod "downward-api-d69abb73-ebb1-4598-86fc-dc6507b396a2" satisfied condition "Succeeded or Failed" May 25 00:40:27.012: INFO: Trying to get logs from node latest-worker2 pod downward-api-d69abb73-ebb1-4598-86fc-dc6507b396a2 container dapi-container: STEP: delete the pod May 25 00:40:27.071: INFO: Waiting for pod downward-api-d69abb73-ebb1-4598-86fc-dc6507b396a2 to disappear May 25 00:40:27.214: INFO: Pod downward-api-d69abb73-ebb1-4598-86fc-dc6507b396a2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:40:27.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5157" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":176,"skipped":2765,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:40:27.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 25 00:40:27.398: INFO: Waiting up to 5m0s for pod "var-expansion-eaddcc84-5d6a-4089-b20f-c26e98434d72" in namespace "var-expansion-7511" to be "Succeeded or Failed" May 25 00:40:27.415: INFO: Pod "var-expansion-eaddcc84-5d6a-4089-b20f-c26e98434d72": Phase="Pending", Reason="", readiness=false. Elapsed: 17.289193ms May 25 00:40:29.531: INFO: Pod "var-expansion-eaddcc84-5d6a-4089-b20f-c26e98434d72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133371377s May 25 00:40:31.536: INFO: Pod "var-expansion-eaddcc84-5d6a-4089-b20f-c26e98434d72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138394259s STEP: Saw pod success May 25 00:40:31.536: INFO: Pod "var-expansion-eaddcc84-5d6a-4089-b20f-c26e98434d72" satisfied condition "Succeeded or Failed" May 25 00:40:31.539: INFO: Trying to get logs from node latest-worker2 pod var-expansion-eaddcc84-5d6a-4089-b20f-c26e98434d72 container dapi-container: STEP: delete the pod May 25 00:40:31.592: INFO: Waiting for pod var-expansion-eaddcc84-5d6a-4089-b20f-c26e98434d72 to disappear May 25 00:40:31.599: INFO: Pod var-expansion-eaddcc84-5d6a-4089-b20f-c26e98434d72 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:40:31.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7511" for this suite. 
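The volume-subpath variant uses subPathExpr (GA since v1.17), which expands $(VAR) from the container's env to pick a per-pod subdirectory at mount time. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /logs"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /logs
      subPathExpr: $(POD_NAME)       # each pod lands in its own subdirectory of the volume
  volumes:
  - name: workdir
    emptyDir: {}
EOF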
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":177,"skipped":2767,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:40:31.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 00:40:31.779: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63015642-f34f-4211-a4db-70caf7952ef5" in namespace "downward-api-5276" to be "Succeeded or Failed" May 25 00:40:31.790: INFO: Pod "downwardapi-volume-63015642-f34f-4211-a4db-70caf7952ef5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.525822ms May 25 00:40:33.854: INFO: Pod "downwardapi-volume-63015642-f34f-4211-a4db-70caf7952ef5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075388555s May 25 00:40:35.858: INFO: Pod "downwardapi-volume-63015642-f34f-4211-a4db-70caf7952ef5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079638231s STEP: Saw pod success May 25 00:40:35.858: INFO: Pod "downwardapi-volume-63015642-f34f-4211-a4db-70caf7952ef5" satisfied condition "Succeeded or Failed" May 25 00:40:35.861: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-63015642-f34f-4211-a4db-70caf7952ef5 container client-container: STEP: delete the pod May 25 00:40:35.897: INFO: Waiting for pod downwardapi-volume-63015642-f34f-4211-a4db-70caf7952ef5 to disappear May 25 00:40:35.909: INFO: Pod downwardapi-volume-63015642-f34f-4211-a4db-70caf7952ef5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:40:35.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5276" for this suite. 
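A downwardAPI volume materializes pod fields as files, and defaultMode sets the permission bits this spec asserts on. A sketch; the names and the 0400 mode are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400              # applied to every file unless an item overrides it
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF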
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":178,"skipped":2773,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:40:35.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 00:40:36.853: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 00:40:38.864: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725964036, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725964036, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725964036, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725964036, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 00:40:41.904: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:40:41.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:40:43.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-3786" for this suite. STEP: Destroying namespace "webhook-3786-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.353 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":179,"skipped":2817,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:40:43.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 25 00:40:43.372: INFO: Waiting up to 5m0s for pod "pod-61d2d554-22af-4d7f-8cd1-a23eefdb8ad1" in namespace "emptydir-9694" to be "Succeeded or Failed" May 25 00:40:43.432: INFO: Pod "pod-61d2d554-22af-4d7f-8cd1-a23eefdb8ad1": Phase="Pending", Reason="", readiness=false. Elapsed: 59.403443ms May 25 00:40:45.436: INFO: Pod "pod-61d2d554-22af-4d7f-8cd1-a23eefdb8ad1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064072459s May 25 00:40:47.440: INFO: Pod "pod-61d2d554-22af-4d7f-8cd1-a23eefdb8ad1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067590456s STEP: Saw pod success May 25 00:40:47.440: INFO: Pod "pod-61d2d554-22af-4d7f-8cd1-a23eefdb8ad1" satisfied condition "Succeeded or Failed" May 25 00:40:47.443: INFO: Trying to get logs from node latest-worker2 pod pod-61d2d554-22af-4d7f-8cd1-a23eefdb8ad1 container test-container: STEP: delete the pod May 25 00:40:47.476: INFO: Waiting for pod pod-61d2d554-22af-4d7f-8cd1-a23eefdb8ad1 to disappear May 25 00:40:47.485: INFO: Pod pod-61d2d554-22af-4d7f-8cd1-a23eefdb8ad1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:40:47.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9694" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":180,"skipped":2845,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:40:47.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 25 00:40:47.552: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:40:52.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5878" for this suite. • [SLOW TEST:5.536 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":181,"skipped":2858,"failed":0} SSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:40:53.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 25 00:40:57.625: INFO: Successfully updated pod "adopt-release-lm6xh" STEP: Checking that the Job readopts the Pod May 25 00:40:57.625: INFO: Waiting up to 15m0s for pod "adopt-release-lm6xh" in namespace "job-9314" to be "adopted" May 25 00:40:57.651: INFO: Pod "adopt-release-lm6xh": Phase="Running", Reason="", readiness=true. Elapsed: 25.675479ms May 25 00:40:59.655: INFO: Pod "adopt-release-lm6xh": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.030142604s May 25 00:40:59.655: INFO: Pod "adopt-release-lm6xh" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 25 00:41:00.170: INFO: Successfully updated pod "adopt-release-lm6xh" STEP: Checking that the Job releases the Pod May 25 00:41:00.171: INFO: Waiting up to 15m0s for pod "adopt-release-lm6xh" in namespace "job-9314" to be "released" May 25 00:41:00.191: INFO: Pod "adopt-release-lm6xh": Phase="Running", Reason="", readiness=true. Elapsed: 20.141286ms May 25 00:41:02.248: INFO: Pod "adopt-release-lm6xh": Phase="Running", Reason="", readiness=true. Elapsed: 2.077981136s May 25 00:41:02.249: INFO: Pod "adopt-release-lm6xh" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:41:02.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9314" for this suite. • [SLOW TEST:9.230 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":182,"skipped":2861,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:41:02.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 25 00:41:02.537: INFO: Waiting up to 5m0s for pod "downward-api-bdb5b2ae-b482-4baa-87d0-977be5b1d3bd" in namespace "downward-api-4143" to be "Succeeded or Failed" May 25 00:41:02.645: INFO: Pod "downward-api-bdb5b2ae-b482-4baa-87d0-977be5b1d3bd": Phase="Pending", Reason="", readiness=false. Elapsed: 107.882796ms May 25 00:41:04.649: INFO: Pod "downward-api-bdb5b2ae-b482-4baa-87d0-977be5b1d3bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11168068s May 25 00:41:06.654: INFO: Pod "downward-api-bdb5b2ae-b482-4baa-87d0-977be5b1d3bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.116404701s STEP: Saw pod success May 25 00:41:06.654: INFO: Pod "downward-api-bdb5b2ae-b482-4baa-87d0-977be5b1d3bd" satisfied condition "Succeeded or Failed" May 25 00:41:06.657: INFO: Trying to get logs from node latest-worker2 pod downward-api-bdb5b2ae-b482-4baa-87d0-977be5b1d3bd container dapi-container: STEP: delete the pod May 25 00:41:06.767: INFO: Waiting for pod downward-api-bdb5b2ae-b482-4baa-87d0-977be5b1d3bd to disappear May 25 00:41:06.815: INFO: Pod downward-api-bdb5b2ae-b482-4baa-87d0-977be5b1d3bd no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:41:06.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4143" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":183,"skipped":2866,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:41:06.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 25 00:41:07.123: INFO: Waiting up to 5m0s for pod "pod-2cfb2066-17d3-4958-8026-5def03afcb40" in namespace "emptydir-1455" to be "Succeeded or Failed" May 25 00:41:07.157: INFO: Pod "pod-2cfb2066-17d3-4958-8026-5def03afcb40": Phase="Pending", Reason="", readiness=false. Elapsed: 34.909551ms May 25 00:41:09.217: INFO: Pod "pod-2cfb2066-17d3-4958-8026-5def03afcb40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094857721s May 25 00:41:11.229: INFO: Pod "pod-2cfb2066-17d3-4958-8026-5def03afcb40": Phase="Running", Reason="", readiness=true. Elapsed: 4.106755539s May 25 00:41:13.234: INFO: Pod "pod-2cfb2066-17d3-4958-8026-5def03afcb40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.110948328s STEP: Saw pod success May 25 00:41:13.234: INFO: Pod "pod-2cfb2066-17d3-4958-8026-5def03afcb40" satisfied condition "Succeeded or Failed" May 25 00:41:13.236: INFO: Trying to get logs from node latest-worker2 pod pod-2cfb2066-17d3-4958-8026-5def03afcb40 container test-container: STEP: delete the pod May 25 00:41:13.339: INFO: Waiting for pod pod-2cfb2066-17d3-4958-8026-5def03afcb40 to disappear May 25 00:41:13.343: INFO: Pod pod-2cfb2066-17d3-4958-8026-5def03afcb40 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:41:13.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1455" for this suite. 
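The emptyDir cells in this matrix vary the user (root or non-root), the expected mode, and the medium ("default" meaning node-local disk, as opposed to medium: Memory for tmpfs). A rough sketch of the (root,0777,default) spec that just finished, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium; the non-root cells add runAsUser to the securityContext
EOF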
• [SLOW TEST:6.527 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":184,"skipped":2874,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:41:13.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 25 00:41:13.480: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9377 /api/v1/namespaces/watch-9377/configmaps/e2e-watch-test-label-changed a4e7cef6-04f6-41fe-b93e-13a11533f251 7426306 0 2020-05-25 00:41:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-25 00:41:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 25 00:41:13.480: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9377 /api/v1/namespaces/watch-9377/configmaps/e2e-watch-test-label-changed a4e7cef6-04f6-41fe-b93e-13a11533f251 7426307 0 2020-05-25 00:41:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-25 00:41:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 00:41:13.481: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9377 /api/v1/namespaces/watch-9377/configmaps/e2e-watch-test-label-changed a4e7cef6-04f6-41fe-b93e-13a11533f251 7426309 0 2020-05-25 00:41:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-25 00:41:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: 
Expecting to observe an add notification for the watched object when the label value was restored May 25 00:41:23.583: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9377 /api/v1/namespaces/watch-9377/configmaps/e2e-watch-test-label-changed a4e7cef6-04f6-41fe-b93e-13a11533f251 7426347 0 2020-05-25 00:41:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-25 00:41:23 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 00:41:23.584: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9377 /api/v1/namespaces/watch-9377/configmaps/e2e-watch-test-label-changed a4e7cef6-04f6-41fe-b93e-13a11533f251 7426348 0 2020-05-25 00:41:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-25 00:41:23 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 00:41:23.584: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9377 /api/v1/namespaces/watch-9377/configmaps/e2e-watch-test-label-changed a4e7cef6-04f6-41fe-b93e-13a11533f251 7426349 0 2020-05-25 00:41:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-25 00:41:23 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:41:23.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9377" for this suite. 
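The Watchers spec exercises the rule that a watch with a label selector reports an object as DELETED the moment it stops matching, and as ADDED when the label is restored. The same behavior can be observed with kubectl; the namespace, configmap name, and label echo the log, while the replacement label value is illustrative:

# terminal 1: watch only configmaps carrying the test label
kubectl get configmaps -n watch-9377 --watch \
    -l watch-this-configmap=label-changed-and-restored
# terminal 2: flipping the label yields a DELETED event on that watch,
# restoring it yields ADDED (newer kubectl can show the event type
# with --output-watch-events)
kubectl label configmap e2e-watch-test-label-changed -n watch-9377 \
    watch-this-configmap=flipped --overwrite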
• [SLOW TEST:10.260 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":185,"skipped":2890,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:41:23.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 25 00:41:27.897: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:41:27.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9255" for this suite. 
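FallbackToLogsOnError copies the tail of the container log into the termination message only when the container exits non-zero; since the container above succeeds, the spec expects the message to stay empty (the "Expected: &{}" assertion). A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox:1.29
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# prints nothing, because the container exited 0:
kubectl get pod termination-msg-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'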
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":186,"skipped":2900,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:41:27.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 25 00:41:28.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4053' May 25 00:41:31.195: INFO: stderr: "" May 25 00:41:31.195: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 25 00:41:31.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4053' May 25 00:41:31.328: INFO: stderr: "" May 25 00:41:31.328: INFO: stdout: "update-demo-nautilus-27kw5 update-demo-nautilus-9bqqb " May 25 00:41:31.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27kw5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4053' May 25 00:41:31.458: INFO: stderr: "" May 25 00:41:31.458: INFO: stdout: "" May 25 00:41:31.458: INFO: update-demo-nautilus-27kw5 is created but not running May 25 00:41:36.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4053' May 25 00:41:36.570: INFO: stderr: "" May 25 00:41:36.570: INFO: stdout: "update-demo-nautilus-27kw5 update-demo-nautilus-9bqqb " May 25 00:41:36.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27kw5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4053' May 25 00:41:36.676: INFO: stderr: "" May 25 00:41:36.676: INFO: stdout: "true" May 25 00:41:36.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27kw5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4053' May 25 00:41:36.771: INFO: stderr: "" May 25 00:41:36.771: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 00:41:36.771: INFO: validating pod update-demo-nautilus-27kw5 May 25 00:41:36.775: INFO: got data: { "image": "nautilus.jpg" } May 25 00:41:36.775: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 00:41:36.775: INFO: update-demo-nautilus-27kw5 is verified up and running May 25 00:41:36.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9bqqb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4053' May 25 00:41:36.871: INFO: stderr: "" May 25 00:41:36.871: INFO: stdout: "true" May 25 00:41:36.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9bqqb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4053' May 25 00:41:36.969: INFO: stderr: "" May 25 00:41:36.969: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 00:41:36.969: INFO: validating pod update-demo-nautilus-9bqqb May 25 00:41:36.980: INFO: got data: { "image": "nautilus.jpg" } May 25 00:41:36.980: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 00:41:36.980: INFO: update-demo-nautilus-9bqqb is verified up and running STEP: scaling down the replication controller May 25 00:41:36.982: INFO: scanned /root for discovery docs: May 25 00:41:36.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4053' May 25 00:41:38.171: INFO: stderr: "" May 25 00:41:38.171: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 25 00:41:38.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4053' May 25 00:41:38.273: INFO: stderr: "" May 25 00:41:38.273: INFO: stdout: "update-demo-nautilus-27kw5 update-demo-nautilus-9bqqb " STEP: Replicas for name=update-demo: expected=1 actual=2 May 25 00:41:43.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4053' May 25 00:41:43.391: INFO: stderr: "" May 25 00:41:43.391: INFO: stdout: "update-demo-nautilus-27kw5 update-demo-nautilus-9bqqb " STEP: Replicas for name=update-demo: expected=1 actual=2 May 25 00:41:48.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4053' May 25 00:41:48.501: INFO: stderr: "" May 25 00:41:48.501: INFO: stdout: "update-demo-nautilus-27kw5 " May 25 00:41:48.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27kw5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4053' May 25 00:41:48.606: INFO: stderr: "" May 25 00:41:48.606: INFO: stdout: "true" May 25 00:41:48.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27kw5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4053' May 25 00:41:48.709: INFO: stderr: "" May 25 00:41:48.709: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 00:41:48.709: INFO: validating pod update-demo-nautilus-27kw5 May 25 00:41:48.712: INFO: got data: { "image": "nautilus.jpg" } May 25 00:41:48.712: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 00:41:48.712: INFO: update-demo-nautilus-27kw5 is verified up and running STEP: scaling up the replication controller May 25 00:41:48.715: INFO: scanned /root for discovery docs: May 25 00:41:48.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4053' May 25 00:41:49.862: INFO: stderr: "" May 25 00:41:49.863: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 25 00:41:49.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4053' May 25 00:41:49.983: INFO: stderr: "" May 25 00:41:49.983: INFO: stdout: "update-demo-nautilus-27kw5 update-demo-nautilus-6vcrl " May 25 00:41:49.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27kw5 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4053' May 25 00:41:50.075: INFO: stderr: "" May 25 00:41:50.075: INFO: stdout: "true" May 25 00:41:50.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27kw5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4053' May 25 00:41:50.166: INFO: stderr: "" May 25 00:41:50.166: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 00:41:50.166: INFO: validating pod update-demo-nautilus-27kw5 May 25 00:41:50.169: INFO: got data: { "image": "nautilus.jpg" } May 25 00:41:50.169: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 00:41:50.169: INFO: update-demo-nautilus-27kw5 is verified up and running May 25 00:41:50.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vcrl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4053' May 25 00:41:50.340: INFO: stderr: "" May 25 00:41:50.340: INFO: stdout: "" May 25 00:41:50.340: INFO: update-demo-nautilus-6vcrl is created but not running May 25 00:41:55.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4053' May 25 00:41:55.463: INFO: stderr: "" May 25 00:41:55.463: INFO: stdout: "update-demo-nautilus-27kw5 update-demo-nautilus-6vcrl " May 25 00:41:55.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27kw5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4053' May 25 00:41:55.567: INFO: stderr: "" May 25 00:41:55.567: INFO: stdout: "true" May 25 00:41:55.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27kw5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4053' May 25 00:41:55.671: INFO: stderr: "" May 25 00:41:55.671: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 00:41:55.671: INFO: validating pod update-demo-nautilus-27kw5 May 25 00:41:55.675: INFO: got data: { "image": "nautilus.jpg" } May 25 00:41:55.675: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 00:41:55.675: INFO: update-demo-nautilus-27kw5 is verified up and running May 25 00:41:55.675: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vcrl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4053' May 25 00:41:55.780: INFO: stderr: "" May 25 00:41:55.780: INFO: stdout: "true" May 25 00:41:55.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vcrl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4053' May 25 00:41:55.886: INFO: stderr: "" May 25 00:41:55.886: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 00:41:55.886: INFO: validating pod update-demo-nautilus-6vcrl May 25 00:41:55.890: INFO: got data: { "image": "nautilus.jpg" } May 25 00:41:55.890: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 00:41:55.890: INFO: update-demo-nautilus-6vcrl is verified up and running STEP: using delete to clean up resources May 25 00:41:55.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4053' May 25 00:41:56.001: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 00:41:56.001: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 25 00:41:56.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4053' May 25 00:41:56.100: INFO: stderr: "No resources found in kubectl-4053 namespace.\n" May 25 00:41:56.101: INFO: stdout: "" May 25 00:41:56.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4053 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 00:41:56.208: INFO: stderr: "" May 25 00:41:56.208: INFO: stdout: "update-demo-nautilus-27kw5\nupdate-demo-nautilus-6vcrl\n" May 25 00:41:56.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4053' May 25 00:41:56.811: INFO: stderr: "No resources found in kubectl-4053 namespace.\n" May 25 00:41:56.811: INFO: stdout: "" May 25 00:41:56.811: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4053 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 00:41:56.918: INFO: stderr: "" May 25 00:41:56.918: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:41:56.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4053" for this suite. 
• [SLOW TEST:28.958 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":187,"skipped":2909,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:41:56.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 25 00:41:57.311: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:42:05.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1137" for this suite. 
• [SLOW TEST:8.420 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":188,"skipped":2916,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:42:05.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7954 STEP: creating service affinity-clusterip-transition in namespace services-7954 STEP: creating replication controller affinity-clusterip-transition in namespace services-7954 I0525 00:42:05.489905 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-7954, replica count: 3 I0525 00:42:08.540369 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 00:42:11.540642 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 00:42:11.546: INFO: Creating new exec pod May 25 00:42:16.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7954 execpod-affinityfkhm8 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 25 00:42:16.841: INFO: stderr: "I0525 00:42:16.733095 4136 log.go:172] (0xc0000e8bb0) (0xc0006f5cc0) Create stream\nI0525 00:42:16.733286 4136 log.go:172] (0xc0000e8bb0) (0xc0006f5cc0) Stream added, broadcasting: 1\nI0525 00:42:16.735815 4136 log.go:172] (0xc0000e8bb0) Reply frame received for 1\nI0525 00:42:16.735855 4136 log.go:172] (0xc0000e8bb0) (0xc0006d05a0) Create stream\nI0525 00:42:16.735862 4136 log.go:172] (0xc0000e8bb0) (0xc0006d05a0) Stream added, broadcasting: 3\nI0525 00:42:16.736781 4136 log.go:172] (0xc0000e8bb0) Reply frame received for 3\nI0525 00:42:16.736817 4136 log.go:172] (0xc0000e8bb0) (0xc000716aa0) Create stream\nI0525 00:42:16.736830 4136 log.go:172] (0xc0000e8bb0) (0xc000716aa0) Stream added, broadcasting: 5\nI0525 00:42:16.737902 4136 log.go:172] (0xc0000e8bb0) Reply frame received for 5\nI0525 00:42:16.807531 4136 log.go:172] (0xc0000e8bb0) Data frame received for 5\nI0525 00:42:16.807558 4136 log.go:172] (0xc000716aa0) (5) Data frame 
handling\nI0525 00:42:16.807575 4136 log.go:172] (0xc000716aa0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0525 00:42:16.834035 4136 log.go:172] (0xc0000e8bb0) Data frame received for 5\nI0525 00:42:16.834071 4136 log.go:172] (0xc000716aa0) (5) Data frame handling\nI0525 00:42:16.834128 4136 log.go:172] (0xc000716aa0) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0525 00:42:16.834335 4136 log.go:172] (0xc0000e8bb0) Data frame received for 5\nI0525 00:42:16.834371 4136 log.go:172] (0xc000716aa0) (5) Data frame handling\nI0525 00:42:16.834578 4136 log.go:172] (0xc0000e8bb0) Data frame received for 3\nI0525 00:42:16.834597 4136 log.go:172] (0xc0006d05a0) (3) Data frame handling\nI0525 00:42:16.836150 4136 log.go:172] (0xc0000e8bb0) Data frame received for 1\nI0525 00:42:16.836166 4136 log.go:172] (0xc0006f5cc0) (1) Data frame handling\nI0525 00:42:16.836179 4136 log.go:172] (0xc0006f5cc0) (1) Data frame sent\nI0525 00:42:16.836326 4136 log.go:172] (0xc0000e8bb0) (0xc0006f5cc0) Stream removed, broadcasting: 1\nI0525 00:42:16.836402 4136 log.go:172] (0xc0000e8bb0) Go away received\nI0525 00:42:16.836792 4136 log.go:172] (0xc0000e8bb0) (0xc0006f5cc0) Stream removed, broadcasting: 1\nI0525 00:42:16.836816 4136 log.go:172] (0xc0000e8bb0) (0xc0006d05a0) Stream removed, broadcasting: 3\nI0525 00:42:16.836831 4136 log.go:172] (0xc0000e8bb0) (0xc000716aa0) Stream removed, broadcasting: 5\n" May 25 00:42:16.842: INFO: stdout: "" May 25 00:42:16.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7954 execpod-affinityfkhm8 -- /bin/sh -x -c nc -zv -t -w 2 10.110.118.153 80' May 25 00:42:17.040: INFO: stderr: "I0525 00:42:16.981583 4159 log.go:172] (0xc0000de370) (0xc000531cc0) Create stream\nI0525 00:42:16.981647 4159 log.go:172] (0xc0000de370) (0xc000531cc0) Stream added, broadcasting: 1\nI0525 00:42:16.983452 4159 log.go:172] (0xc0000de370) Reply frame received for 1\nI0525 00:42:16.983492 4159 log.go:172] (0xc0000de370) (0xc00043e320) Create stream\nI0525 00:42:16.983502 4159 log.go:172] (0xc0000de370) (0xc00043e320) Stream added, broadcasting: 3\nI0525 00:42:16.984553 4159 log.go:172] (0xc0000de370) Reply frame received for 3\nI0525 00:42:16.984583 4159 log.go:172] (0xc0000de370) (0xc00043f2c0) Create stream\nI0525 00:42:16.984593 4159 log.go:172] (0xc0000de370) (0xc00043f2c0) Stream added, broadcasting: 5\nI0525 00:42:16.985760 4159 log.go:172] (0xc0000de370) Reply frame received for 5\nI0525 00:42:17.033884 4159 log.go:172] (0xc0000de370) Data frame received for 5\nI0525 00:42:17.033930 4159 log.go:172] (0xc00043f2c0) (5) Data frame handling\nI0525 00:42:17.033959 4159 log.go:172] (0xc00043f2c0) (5) Data frame sent\n+ nc -zv -t -w 2 10.110.118.153 80\nConnection to 10.110.118.153 80 port [tcp/http] succeeded!\nI0525 00:42:17.033993 4159 log.go:172] (0xc0000de370) Data frame received for 3\nI0525 00:42:17.034005 4159 log.go:172] (0xc00043e320) (3) Data frame handling\nI0525 00:42:17.034028 4159 log.go:172] (0xc0000de370) Data frame received for 5\nI0525 00:42:17.034046 4159 log.go:172] (0xc00043f2c0) (5) Data frame handling\nI0525 00:42:17.035549 4159 log.go:172] (0xc0000de370) Data frame received for 1\nI0525 00:42:17.035581 4159 log.go:172] (0xc000531cc0) (1) Data frame handling\nI0525 00:42:17.035604 4159 log.go:172] (0xc000531cc0) (1) Data frame sent\nI0525 00:42:17.035625 4159 log.go:172] (0xc0000de370) (0xc000531cc0) Stream 
removed, broadcasting: 1\nI0525 00:42:17.035646 4159 log.go:172] (0xc0000de370) Go away received\nI0525 00:42:17.036039 4159 log.go:172] (0xc0000de370) (0xc000531cc0) Stream removed, broadcasting: 1\nI0525 00:42:17.036053 4159 log.go:172] (0xc0000de370) (0xc00043e320) Stream removed, broadcasting: 3\nI0525 00:42:17.036059 4159 log.go:172] (0xc0000de370) (0xc00043f2c0) Stream removed, broadcasting: 5\n" May 25 00:42:17.040: INFO: stdout: "" May 25 00:42:17.049: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7954 execpod-affinityfkhm8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.110.118.153:80/ ; done' May 25 00:42:17.353: INFO: stderr: "I0525 00:42:17.186824 4179 log.go:172] (0xc0009bf130) (0xc0006c1540) Create stream\nI0525 00:42:17.186886 4179 log.go:172] (0xc0009bf130) (0xc0006c1540) Stream added, broadcasting: 1\nI0525 00:42:17.191358 4179 log.go:172] (0xc0009bf130) Reply frame received for 1\nI0525 00:42:17.191399 4179 log.go:172] (0xc0009bf130) (0xc000674d20) Create stream\nI0525 00:42:17.191408 4179 log.go:172] (0xc0009bf130) (0xc000674d20) Stream added, broadcasting: 3\nI0525 00:42:17.192170 4179 log.go:172] (0xc0009bf130) Reply frame received for 3\nI0525 00:42:17.192189 4179 log.go:172] (0xc0009bf130) (0xc0006685a0) Create stream\nI0525 00:42:17.192197 4179 log.go:172] (0xc0009bf130) (0xc0006685a0) Stream added, broadcasting: 5\nI0525 00:42:17.192888 4179 log.go:172] (0xc0009bf130) Reply frame received for 5\nI0525 00:42:17.246630 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.246658 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.246669 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.246683 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.246690 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.246697 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.251819 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.251840 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.251859 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.252178 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.252199 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.252213 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.252231 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.252241 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.252253 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.255774 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.255792 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.255808 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.256387 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.256401 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.256414 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.256431 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.256451 4179 log.go:172] 
(0xc000674d20) (3) Data frame handling\nI0525 00:42:17.256470 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.263981 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.264006 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.264029 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.264819 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.264856 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.264870 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.264893 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.264911 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.264935 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.270291 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.270318 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.270335 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.271200 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.271222 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.271233 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.271259 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.271285 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.271308 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.276616 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.276633 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.276729 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.277036 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.277055 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.277063 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.277075 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.277084 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.277097 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.281882 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.281898 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.281912 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.282470 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.282485 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.282496 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.282554 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.282567 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.282575 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.286934 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.286969 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.286990 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.287494 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.287526 4179 
log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.287539 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.287559 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.287570 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.287578 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.294050 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.294070 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.294089 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.294713 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.294741 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.294752 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.294766 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.294775 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.294783 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.299258 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.299291 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.299308 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.299828 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.299868 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.299893 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.299920 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.299936 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.299968 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.304528 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.304550 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.304576 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.304915 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.304933 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.304940 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.304949 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.304956 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.304964 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.310932 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.310954 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.310965 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.311568 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.311584 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.311600 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0525 00:42:17.311675 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.311704 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\n http://10.110.118.153:80/\nI0525 00:42:17.311724 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.311771 4179 
log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.311778 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.311793 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\nI0525 00:42:17.315567 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.315589 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.315605 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.316044 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.316059 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.316080 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.316096 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.316110 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ echo\n+ curlI0525 00:42:17.316123 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.316135 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.316145 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.316160 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.321064 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.321089 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.321268 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.322096 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.322118 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.322130 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.322143 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.322153 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.322168 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.326188 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.326205 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.326214 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.326497 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.326517 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.326528 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.326602 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.326615 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.326631 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.332518 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.332546 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.332569 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.333106 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.333336 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.333350 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.333368 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.333377 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.333392 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\nI0525 00:42:17.333409 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 
00:42:17.333420 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.333438 4179 log.go:172] (0xc0006685a0) (5) Data frame sent\nI0525 00:42:17.345929 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.345949 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.345961 4179 log.go:172] (0xc000674d20) (3) Data frame sent\nI0525 00:42:17.346778 4179 log.go:172] (0xc0009bf130) Data frame received for 5\nI0525 00:42:17.346790 4179 log.go:172] (0xc0006685a0) (5) Data frame handling\nI0525 00:42:17.347065 4179 log.go:172] (0xc0009bf130) Data frame received for 3\nI0525 00:42:17.347091 4179 log.go:172] (0xc000674d20) (3) Data frame handling\nI0525 00:42:17.348717 4179 log.go:172] (0xc0009bf130) Data frame received for 1\nI0525 00:42:17.348727 4179 log.go:172] (0xc0006c1540) (1) Data frame handling\nI0525 00:42:17.348733 4179 log.go:172] (0xc0006c1540) (1) Data frame sent\nI0525 00:42:17.348883 4179 log.go:172] (0xc0009bf130) (0xc0006c1540) Stream removed, broadcasting: 1\nI0525 00:42:17.348919 4179 log.go:172] (0xc0009bf130) Go away received\nI0525 00:42:17.349309 4179 log.go:172] (0xc0009bf130) (0xc0006c1540) Stream removed, broadcasting: 1\nI0525 00:42:17.349323 4179 log.go:172] (0xc0009bf130) (0xc000674d20) Stream removed, broadcasting: 3\nI0525 00:42:17.349329 4179 log.go:172] (0xc0009bf130) (0xc0006685a0) Stream removed, broadcasting: 5\n" May 25 00:42:17.354: INFO: stdout: "\naffinity-clusterip-transition-qdsrg\naffinity-clusterip-transition-hnrlh\naffinity-clusterip-transition-qdsrg\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-hnrlh\naffinity-clusterip-transition-hnrlh\naffinity-clusterip-transition-hnrlh\naffinity-clusterip-transition-hnrlh\naffinity-clusterip-transition-qdsrg\naffinity-clusterip-transition-qdsrg\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-qdsrg\naffinity-clusterip-transition-qdsrg\naffinity-clusterip-transition-qdsrg\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5" May 25 00:42:17.354: INFO: Received response from host: May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-qdsrg May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-hnrlh May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-qdsrg May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-hnrlh May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-hnrlh May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-hnrlh May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-hnrlh May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-qdsrg May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-qdsrg May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-qdsrg May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-qdsrg May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-qdsrg May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-q25l5 
May 25 00:42:17.354: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7954 execpod-affinityfkhm8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.110.118.153:80/ ; done' May 25 00:42:17.657: INFO: stderr: "I0525 00:42:17.521346 4199 log.go:172] (0xc00003a0b0) (0xc0006b4dc0) Create stream\nI0525 00:42:17.521387 4199 log.go:172] (0xc00003a0b0) (0xc0006b4dc0) Stream added, broadcasting: 1\nI0525 00:42:17.523110 4199 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0525 00:42:17.523131 4199 log.go:172] (0xc00003a0b0) (0xc000654460) Create stream\nI0525 00:42:17.523139 4199 log.go:172] (0xc00003a0b0) (0xc000654460) Stream added, broadcasting: 3\nI0525 00:42:17.523860 4199 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0525 00:42:17.523890 4199 log.go:172] (0xc00003a0b0) (0xc000654d20) Create stream\nI0525 00:42:17.523898 4199 log.go:172] (0xc00003a0b0) (0xc000654d20) Stream added, broadcasting: 5\nI0525 00:42:17.524534 4199 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0525 00:42:17.571324 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.571355 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.571366 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.571386 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.571393 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.571401 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.574040 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.574074 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.574114 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.574312 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.574340 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.574359 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0525 00:42:17.574377 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.574389 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.574400 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n 2 http://10.110.118.153:80/\nI0525 00:42:17.574418 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.574429 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.574439 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.580695 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.580719 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.580736 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.581438 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.581461 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.581510 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.581522 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.581533 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.581544 4199 log.go:172] (0xc000654460) (3) Data frame 
sent\nI0525 00:42:17.587363 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.587376 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.587385 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.587808 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.587835 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.587847 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.587866 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.587892 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.587914 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.592201 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.592222 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.592235 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.592648 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.592667 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.592677 4199 log.go:172] (0xc000654d20) (5) Data frame sent\nI0525 00:42:17.592687 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.592697 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.592708 4199 log.go:172] (0xc000654460) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.597621 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.597635 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.597646 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.598008 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.598031 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.598041 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.598054 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.598069 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.598085 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.602326 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.602337 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.602343 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.602776 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.602786 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.602792 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.602799 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.602803 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.602809 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.606656 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.606667 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.606673 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.606958 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.606966 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.606971 4199 log.go:172] (0xc000654460) (3) 
Data frame sent\nI0525 00:42:17.606985 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.607005 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.607039 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.611709 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.611738 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.611762 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.612101 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.612123 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.612151 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.612173 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.612195 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.612208 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.616609 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.616635 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.616656 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.617020 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.617040 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.617058 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.617077 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.617090 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.617099 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.620958 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.621032 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.621063 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.621811 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.621868 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.621906 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.621931 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.621943 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.621959 4199 log.go:172] (0xc000654d20) (5) Data frame sent\nI0525 00:42:17.621972 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/I0525 00:42:17.621987 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.622008 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n\nI0525 00:42:17.625849 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.625874 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.625896 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.626228 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.626250 4199 log.go:172] (0xc000654d20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.626268 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.626324 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.626345 4199 log.go:172] (0xc000654460) (3) 
Data frame sent\nI0525 00:42:17.626366 4199 log.go:172] (0xc000654d20) (5) Data frame sent\nI0525 00:42:17.630046 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.630076 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.630101 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.630389 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.630401 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.630422 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.630437 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.630450 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.630466 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.634204 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.634232 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.634262 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.634445 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.634475 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.634493 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.634507 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.634515 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.634524 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.638548 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.638570 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.638588 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.639339 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.639369 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.639380 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.639392 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.639399 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.639407 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.643492 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.643521 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.643549 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.643837 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.643852 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.643861 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.643874 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.643881 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.643891 4199 log.go:172] (0xc000654d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.118.153:80/\nI0525 00:42:17.648859 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.648889 4199 log.go:172] (0xc000654460) (3) Data frame handling\nI0525 00:42:17.649101 4199 log.go:172] (0xc000654460) (3) Data frame sent\nI0525 00:42:17.651768 4199 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 00:42:17.651818 4199 log.go:172] (0xc000654460) 
(3) Data frame handling\nI0525 00:42:17.651855 4199 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 00:42:17.651893 4199 log.go:172] (0xc000654d20) (5) Data frame handling\nI0525 00:42:17.653756 4199 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0525 00:42:17.653768 4199 log.go:172] (0xc0006b4dc0) (1) Data frame handling\nI0525 00:42:17.653779 4199 log.go:172] (0xc0006b4dc0) (1) Data frame sent\nI0525 00:42:17.653841 4199 log.go:172] (0xc00003a0b0) (0xc0006b4dc0) Stream removed, broadcasting: 1\nI0525 00:42:17.653904 4199 log.go:172] (0xc00003a0b0) Go away received\nI0525 00:42:17.654121 4199 log.go:172] (0xc00003a0b0) (0xc0006b4dc0) Stream removed, broadcasting: 1\nI0525 00:42:17.654132 4199 log.go:172] (0xc00003a0b0) (0xc000654460) Stream removed, broadcasting: 3\nI0525 00:42:17.654137 4199 log.go:172] (0xc00003a0b0) (0xc000654d20) Stream removed, broadcasting: 5\n" May 25 00:42:17.658: INFO: stdout: "\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5\naffinity-clusterip-transition-q25l5" May 25 00:42:17.658: INFO: Received response from host: May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Received response from host: affinity-clusterip-transition-q25l5 May 25 00:42:17.658: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-7954, will wait for the garbage collector to delete the pods May 25 00:42:17.778: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.375545ms May 25 00:42:18.179: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.224316ms [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:42:25.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7954" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.033 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":189,"skipped":2929,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:42:25.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-c080ab12-77ff-46c7-b4b2-830dab4c098b in namespace container-probe-7063 May 25 00:42:29.566: INFO: Started pod liveness-c080ab12-77ff-46c7-b4b2-830dab4c098b in namespace container-probe-7063 STEP: checking the pod's current state and verifying that restartCount is present May 25 00:42:29.568: INFO: Initial restart count of pod liveness-c080ab12-77ff-46c7-b4b2-830dab4c098b is 0 May 25 00:42:51.688: INFO: Restart count of pod container-probe-7063/liveness-c080ab12-77ff-46c7-b4b2-830dab4c098b is now 1 (22.119555168s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:42:51.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7063" for this suite. 
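The probe test above watches restartCount go from 0 to 1 roughly 22s after start, consistent with an initial delay plus a failed /healthz probe. A hedged sketch of a pod of that shape; the agnhost "liveness" server (which starts failing /healthz after a short healthy window) and the exact probe numbers are assumptions, not read from this log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http        # illustrative; the test uses a UUID-suffixed name
spec:
  containers:
  - name: liveness
    # assumption: agnhost's `liveness` subcommand, which serves /healthz on 8080
    # and begins returning errors after a few healthy seconds
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args: ['liveness']
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15   # assumed values in the spirit of the test
      failureThreshold: 1
EOF

# The kubelet kills and restarts the container once the probe fails:
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'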
• [SLOW TEST:26.442 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":190,"skipped":2941,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:42:51.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-1752/configmap-test-340c6205-b4f3-4f42-bcb2-8b41d5e6cbe1 STEP: Creating a pod to test consume configMaps May 25 00:42:52.009: INFO: Waiting up to 5m0s for pod "pod-configmaps-66f9c020-47ac-4f67-8260-6bd61e7c4152" in namespace "configmap-1752" to be "Succeeded or Failed" May 25 00:42:52.155: INFO: Pod "pod-configmaps-66f9c020-47ac-4f67-8260-6bd61e7c4152": Phase="Pending", Reason="", readiness=false. Elapsed: 146.494434ms May 25 00:42:54.287: INFO: Pod "pod-configmaps-66f9c020-47ac-4f67-8260-6bd61e7c4152": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277956046s May 25 00:42:56.291: INFO: Pod "pod-configmaps-66f9c020-47ac-4f67-8260-6bd61e7c4152": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.282153035s STEP: Saw pod success May 25 00:42:56.291: INFO: Pod "pod-configmaps-66f9c020-47ac-4f67-8260-6bd61e7c4152" satisfied condition "Succeeded or Failed" May 25 00:42:56.294: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-66f9c020-47ac-4f67-8260-6bd61e7c4152 container env-test: STEP: delete the pod May 25 00:42:56.342: INFO: Waiting for pod pod-configmaps-66f9c020-47ac-4f67-8260-6bd61e7c4152 to disappear May 25 00:42:56.394: INFO: Pod pod-configmaps-66f9c020-47ac-4f67-8260-6bd61e7c4152 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:42:56.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1752" for this suite. 
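The ConfigMap test above creates a map, injects one of its keys into a pod's environment, waits for the pod to reach Succeeded, and reads its logs. The same pattern, with illustrative names in place of the test's UUID-suffixed ones:

# Create the map and a pod that echoes the injected variable (illustrative names).
kubectl create configmap configmap-test --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env
spec:
  restartPolicy: Never       # lets the pod reach Succeeded, as the test requires
  containers:
  - name: env-test
    image: busybox:1.29
    command: ['sh', '-c', 'env | grep CONFIG_DATA_1']
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
EOF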
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":191,"skipped":3014,"failed":0} ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:42:56.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:42:56.520: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 25 00:43:01.524: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 25 00:43:01.524: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 25 00:43:01.628: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3912 /apis/apps/v1/namespaces/deployment-3912/deployments/test-cleanup-deployment 08db58d3-fc44-416b-b5fc-ee54d7955e85 7426984 1 2020-05-25 00:43:01 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-25 00:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a266f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 25 00:43:01.670: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-3912 /apis/apps/v1/namespaces/deployment-3912/replicasets/test-cleanup-deployment-6688745694 d6876176-809a-4f83-b688-6766af5c01fa 7426993 1 2020-05-25 00:43:01 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 08db58d3-fc44-416b-b5fc-ee54d7955e85 0xc002f7e6f7 0xc002f7e6f8}] [] [{kube-controller-manager Update apps/v1 2020-05-25 00:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08db58d3-fc44-416b-b5fc-ee54d7955e85\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f7e788 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 00:43:01.670: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 25 00:43:01.670: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3912 /apis/apps/v1/namespaces/deployment-3912/replicasets/test-cleanup-controller 
8ade6067-e650-43d6-b084-3535808af4cf 7426986 1 2020-05-25 00:42:56 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 08db58d3-fc44-416b-b5fc-ee54d7955e85 0xc002f7e467 0xc002f7e468}] [] [{e2e.test Update apps/v1 2020-05-25 00:42:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-25 00:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"08db58d3-fc44-416b-b5fc-ee54d7955e85\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002f7e688 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 25 00:43:01.711: INFO: Pod "test-cleanup-controller-xzjwn" is available: &Pod{ObjectMeta:{test-cleanup-controller-xzjwn test-cleanup-controller- deployment-3912 /api/v1/namespaces/deployment-3912/pods/test-cleanup-controller-xzjwn 3b538f73-1245-4d2e-9bb7-758191570e7e 7426975 0 2020-05-25 00:42:56 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 8ade6067-e650-43d6-b084-3535808af4cf 0xc002f7edc7 0xc002f7edc8}] [] [{kube-controller-manager Update v1 2020-05-25 00:42:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ade6067-e650-43d6-b084-3535808af4cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 00:42:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.190\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zwfzp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zwfzp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zwfzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:42:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:42:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:42:59 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:42:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.190,StartTime:2020-05-25 00:42:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 00:42:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://76f05562b4761320b4a13b9a18cc842bc489388d5de9658796f79426194b9bfa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.190,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 00:43:01.711: INFO: Pod "test-cleanup-deployment-6688745694-rj6t6" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-rj6t6 test-cleanup-deployment-6688745694- deployment-3912 /api/v1/namespaces/deployment-3912/pods/test-cleanup-deployment-6688745694-rj6t6 cc3e6e58-e8d7-4903-93de-e4d9e2247597 7426991 0 2020-05-25 00:43:01 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 d6876176-809a-4f83-b688-6766af5c01fa 0xc002f7f0e7 0xc002f7f0e8}] [] [{kube-controller-manager Update v1 2020-05-25 00:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6876176-809a-4f83-b688-6766af5c01fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zwfzp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zwfzp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zwfzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,All
owPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:43:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:43:01.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3912" for this suite. 
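Worth noting in the Deployment dump above: RevisionHistoryLimit:*0. That is the single knob this spec exercises; with a zero history limit the Deployment controller deletes superseded ReplicaSets as soon as they are scaled down, which is why test-cleanup-controller is expected to disappear. A minimal Go sketch of such a Deployment, reusing the agnhost image from the log (names are illustrative, not the exact e2e fixture):

package sketch

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// newCleanupDeployment builds a Deployment whose old ReplicaSets are
// garbage-collected immediately after a rollout (RevisionHistoryLimit = 0).
func newCleanupDeployment() *appsv1.Deployment {
    labels := map[string]string{"name": "cleanup-pod"}
    return &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas:             int32Ptr(1),
            RevisionHistoryLimit: int32Ptr(0), // keep no superseded ReplicaSets
            Selector:             &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "agnhost",
                        Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
                    }},
                },
            },
        },
    }
}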
• [SLOW TEST:5.317 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":192,"skipped":3014,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:43:01.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:45:01.918: INFO: Deleting pod "var-expansion-e7f229e3-65a1-4e90-a3f7-7fec142ae6ca" in namespace "var-expansion-1521" May 25 00:45:01.923: INFO: Wait up to 5m0s for pod "var-expansion-e7f229e3-65a1-4e90-a3f7-7fec142ae6ca" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:45:03.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1521" for this suite. 
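This spec passes only if the pod never starts: subPathExpr values are expanded from $(VAR) references at mount time, and a result containing backticks is refused by the kubelet. A sketch of the shape of pod being exercised, assuming the offending value arrives via an env var (the log does not show the exact fixture, so names and values here are illustrative):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// badSubpathPod sketches a pod whose volume subPathExpr resolves to a
// string containing backticks; the kubelet fails the mount, the pod never
// reaches Running, and the test then deletes it, as seen in the log above.
func badSubpathPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-backticks"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name:         "workdir",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{{
                Name:  "dapi-container",
                Image: "busybox",
                Env: []corev1.EnvVar{{
                    Name:  "POD_NAME",
                    Value: "`bad-value`", // backticks make the expanded subpath invalid
                }},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:        "workdir",
                    MountPath:   "/volume_mount",
                    SubPathExpr: "$(POD_NAME)", // expanded by the kubelet at mount time
                }},
            }},
        },
    }
}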
• [SLOW TEST:122.257 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":193,"skipped":3061,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:45:03.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 25 00:45:09.384: INFO: Successfully updated pod "annotationupdate9a8902af-05cc-4b16-b188-07ba1239f4d7" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:45:11.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3622" for this suite. 
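"Successfully updated pod" is the crux of this spec: the pod projects its own metadata.annotations into a downwardAPI file, the test patches the annotations, and the kubelet rewrites the file in place. Roughly, the volume wiring looks like this in Go (volume and file names are illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

// annotationsVolume projects the pod's own annotations into a file that
// the kubelet keeps in sync after metadata updates.
func annotationsVolume() corev1.Volume {
    return corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "annotations",
                            FieldRef: &corev1.ObjectFieldSelector{
                                APIVersion: "v1",
                                FieldPath:  "metadata.annotations",
                            },
                        }},
                    },
                }},
            },
        },
    }
}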
• [SLOW TEST:7.452 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":194,"skipped":3062,"failed":0} [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:45:11.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 25 00:45:15.557: INFO: &Pod{ObjectMeta:{send-events-d9748c7d-756d-4fe4-ab3b-f66e6900b56b events-6300 /api/v1/namespaces/events-6300/pods/send-events-d9748c7d-756d-4fe4-ab3b-f66e6900b56b 777a9ba4-c58a-42a9-88c1-c31cf8159444 7427481 0 2020-05-25 00:45:11 +0000 UTC map[name:foo time:487002850] map[] [] [] [{e2e.test Update v1 2020-05-25 00:45:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 00:45:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.188\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vh4r9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vh4r9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vh4r9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:45:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:45:14 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:45:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 00:45:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.188,StartTime:2020-05-25 00:45:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 00:45:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://1ff0bea014242e61dbb25e3b8c0de8059b90fe517e4f30e17d1b939f45441e26,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.188,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 25 00:45:17.562: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 25 00:45:19.566: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:45:19.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6300" for this suite. • [SLOW TEST:8.221 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":195,"skipped":3062,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:45:19.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-44dbb7ec-3bb5-41cd-917b-a7fa68b535bd STEP: Creating a pod to test consume secrets May 25 00:45:19.737: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7a24128c-149e-428a-93be-9d43f3952e88" in namespace "projected-3053" to be "Succeeded or Failed" May 25 00:45:19.784: INFO: Pod "pod-projected-secrets-7a24128c-149e-428a-93be-9d43f3952e88": Phase="Pending", Reason="", readiness=false. 
Elapsed: 47.521573ms May 25 00:45:21.787: INFO: Pod "pod-projected-secrets-7a24128c-149e-428a-93be-9d43f3952e88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05067148s May 25 00:45:23.792: INFO: Pod "pod-projected-secrets-7a24128c-149e-428a-93be-9d43f3952e88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054950651s STEP: Saw pod success May 25 00:45:23.792: INFO: Pod "pod-projected-secrets-7a24128c-149e-428a-93be-9d43f3952e88" satisfied condition "Succeeded or Failed" May 25 00:45:23.796: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-7a24128c-149e-428a-93be-9d43f3952e88 container secret-volume-test: STEP: delete the pod May 25 00:45:23.840: INFO: Waiting for pod pod-projected-secrets-7a24128c-149e-428a-93be-9d43f3952e88 to disappear May 25 00:45:23.880: INFO: Pod pod-projected-secrets-7a24128c-149e-428a-93be-9d43f3952e88 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:45:23.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3053" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":196,"skipped":3070,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:45:23.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 25 00:45:23.938: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 00:45:23.961: INFO: Waiting for terminating namespaces to be deleted... 
May 25 00:45:23.964: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 25 00:45:23.970: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 25 00:45:23.970: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 25 00:45:23.970: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 25 00:45:23.970: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 25 00:45:23.970: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 25 00:45:23.970: INFO: Container kindnet-cni ready: true, restart count 0 May 25 00:45:23.970: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 25 00:45:23.970: INFO: Container kube-proxy ready: true, restart count 0 May 25 00:45:23.970: INFO: annotationupdate9a8902af-05cc-4b16-b188-07ba1239f4d7 from projected-3622 started at 2020-05-25 00:45:04 +0000 UTC (1 container status recorded) May 25 00:45:23.970: INFO: Container client-container ready: false, restart count 0 May 25 00:45:23.970: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 25 00:45:23.975: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 25 00:45:23.975: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 25 00:45:23.975: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 25 00:45:23.975: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 25 00:45:23.975: INFO: send-events-d9748c7d-756d-4fe4-ab3b-f66e6900b56b from events-6300 started at 2020-05-25 00:45:11 +0000 UTC (1 container status recorded) May 25 00:45:23.975: INFO: Container p ready: true, restart count 0 May 25 00:45:23.975: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 25 00:45:23.975: INFO: Container kindnet-cni ready: true, restart count 0 May 25 00:45:23.975: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 25 00:45:23.975: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 May 25 00:45:24.119: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker May 25 00:45:24.119: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2 May 25 00:45:24.119: INFO: Pod send-events-d9748c7d-756d-4fe4-ab3b-f66e6900b56b requesting resource cpu=0m on Node latest-worker2 May 25 00:45:24.119: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker May 25 00:45:24.119: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 May 25 00:45:24.119: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker May 25 00:45:24.119: INFO: Pod kube-proxy-pcmmp requesting
resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 25 00:45:24.119: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker May 25 00:45:24.124: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-c505682b-3498-412a-bd6e-3ade9a3187cc.16121ed64d77cac5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2064/filler-pod-c505682b-3498-412a-bd6e-3ade9a3187cc to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-c505682b-3498-412a-bd6e-3ade9a3187cc.16121ed6dad56e4f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c505682b-3498-412a-bd6e-3ade9a3187cc.16121ed72b752691], Reason = [Created], Message = [Created container filler-pod-c505682b-3498-412a-bd6e-3ade9a3187cc] STEP: Considering event: Type = [Normal], Name = [filler-pod-c505682b-3498-412a-bd6e-3ade9a3187cc.16121ed73a276242], Reason = [Started], Message = [Started container filler-pod-c505682b-3498-412a-bd6e-3ade9a3187cc] STEP: Considering event: Type = [Normal], Name = [filler-pod-f8cd9651-7bae-4d37-a989-2adbb7102041.16121ed64df16064], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2064/filler-pod-f8cd9651-7bae-4d37-a989-2adbb7102041 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-f8cd9651-7bae-4d37-a989-2adbb7102041.16121ed69f4dd4c5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f8cd9651-7bae-4d37-a989-2adbb7102041.16121ed7087d5845], Reason = [Created], Message = [Created container filler-pod-f8cd9651-7bae-4d37-a989-2adbb7102041] STEP: Considering event: Type = [Normal], Name = [filler-pod-f8cd9651-7bae-4d37-a989-2adbb7102041.16121ed72d9faddc], Reason = [Started], Message = [Started container filler-pod-f8cd9651-7bae-4d37-a989-2adbb7102041] STEP: Considering event: Type = [Warning], Name = [additional-pod.16121ed7b568abb5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.16121ed7b98ab005], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:45:31.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2064" for this suite. 
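The arithmetic behind those FailedScheduling events: each filler pod requests the node's remaining allocatable CPU (cpu=11130m here), so any further pod with a CPU request fails with Insufficient cpu on both workers, and the master's taint excludes the third node. A sketch of the request that makes a pod count against allocatable; the scheduler sums declared requests, not actual usage (helper name is ours):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

// fillerContainer pins down a fixed amount of a node's allocatable CPU.
// The scheduler adds such requests together when deciding whether
// another pod fits on the node.
func fillerContainer(milliCPU string) corev1.Container {
    return corev1.Container{
        Name:  "filler",
        Image: "k8s.gcr.io/pause:3.2",
        Resources: corev1.ResourceRequirements{
            Requests: corev1.ResourceList{
                corev1.ResourceCPU: resource.MustParse(milliCPU), // e.g. "11130m"
            },
            Limits: corev1.ResourceList{
                corev1.ResourceCPU: resource.MustParse(milliCPU),
            },
        },
    }
}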
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.429 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":197,"skipped":3098,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:45:31.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-7dcffb35-b035-4afa-9b42-b8e3166285d0 STEP: Creating a pod to test consume secrets May 25 00:45:31.390: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d48a51e2-bcf2-457c-9301-c63efbb4bfdf" in namespace "projected-5343" to be "Succeeded or Failed" May 25 00:45:31.437: INFO: Pod "pod-projected-secrets-d48a51e2-bcf2-457c-9301-c63efbb4bfdf": Phase="Pending", Reason="", readiness=false. Elapsed: 47.38894ms May 25 00:45:33.509: INFO: Pod "pod-projected-secrets-d48a51e2-bcf2-457c-9301-c63efbb4bfdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119565194s May 25 00:45:35.514: INFO: Pod "pod-projected-secrets-d48a51e2-bcf2-457c-9301-c63efbb4bfdf": Phase="Running", Reason="", readiness=true. Elapsed: 4.124579099s May 25 00:45:37.519: INFO: Pod "pod-projected-secrets-d48a51e2-bcf2-457c-9301-c63efbb4bfdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.128652529s STEP: Saw pod success May 25 00:45:37.519: INFO: Pod "pod-projected-secrets-d48a51e2-bcf2-457c-9301-c63efbb4bfdf" satisfied condition "Succeeded or Failed" May 25 00:45:37.521: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-d48a51e2-bcf2-457c-9301-c63efbb4bfdf container projected-secret-volume-test: STEP: delete the pod May 25 00:45:37.815: INFO: Waiting for pod pod-projected-secrets-d48a51e2-bcf2-457c-9301-c63efbb4bfdf to disappear May 25 00:45:37.842: INFO: Pod pod-projected-secrets-d48a51e2-bcf2-457c-9301-c63efbb4bfdf no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:45:37.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5343" for this suite. 
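Concretely, "non-root with defaultMode and fsGroup set" combines three fields: a restrictive DefaultMode on the projection, a non-zero RunAsUser, and an FSGroup that adjusts group ownership so the non-root user can still read the projected files. A hedged sketch (UID/GID and mode values are illustrative, not the exact fixture):

package sketch

import corev1 "k8s.io/api/core/v1"

func int32p(i int32) *int32 { return &i }
func int64p(i int64) *int64 { return &i }

// nonRootSecretPodSpec shows the fields this test combines: DefaultMode
// on the projected secret, plus RunAsUser/FSGroup on the pod.
func nonRootSecretPodSpec(secretName string) corev1.PodSpec {
    return corev1.PodSpec{
        SecurityContext: &corev1.PodSecurityContext{
            RunAsUser: int64p(1000), // non-root
            FSGroup:   int64p(1001), // projected files get this group
        },
        Volumes: []corev1.Volume{{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: int32p(0440), // r--r----- before fsGroup adjustments
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                        },
                    }},
                },
            },
        }},
        Containers: []corev1.Container{{
            Name:  "projected-secret-volume-test",
            Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
            VolumeMounts: []corev1.VolumeMount{{
                Name:      "projected-secret-volume",
                MountPath: "/etc/projected-secret-volume",
                ReadOnly:  true,
            }},
        }},
    }
}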
• [SLOW TEST:6.563 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":198,"skipped":3102,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:45:37.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 00:45:38.015: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2954e655-eb86-4541-a37f-1231e5264d54" in namespace "downward-api-2132" to be "Succeeded or Failed" May 25 00:45:38.070: INFO: Pod "downwardapi-volume-2954e655-eb86-4541-a37f-1231e5264d54": Phase="Pending", Reason="", readiness=false. Elapsed: 55.601378ms May 25 00:45:40.075: INFO: Pod "downwardapi-volume-2954e655-eb86-4541-a37f-1231e5264d54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060317833s May 25 00:45:42.262: INFO: Pod "downwardapi-volume-2954e655-eb86-4541-a37f-1231e5264d54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.247184349s STEP: Saw pod success May 25 00:45:42.262: INFO: Pod "downwardapi-volume-2954e655-eb86-4541-a37f-1231e5264d54" satisfied condition "Succeeded or Failed" May 25 00:45:42.266: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2954e655-eb86-4541-a37f-1231e5264d54 container client-container: STEP: delete the pod May 25 00:45:42.530: INFO: Waiting for pod downwardapi-volume-2954e655-eb86-4541-a37f-1231e5264d54 to disappear May 25 00:45:42.536: INFO: Pod downwardapi-volume-2954e655-eb86-4541-a37f-1231e5264d54 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:45:42.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2132" for this suite. 
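Here client-container reads its own limits.cpu from a downwardAPI volume file rather than an env var. The wiring, roughly (a sketch, not the exact fixture; Divisor is left at its default of "1"):

package sketch

import corev1 "k8s.io/api/core/v1"

// cpuLimitVolume exposes the named container's limits.cpu as a file; the
// kubelet writes the resolved value into <mountPath>/cpu_limit.
func cpuLimitVolume(containerName string) corev1.Volume {
    return corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "cpu_limit",
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: containerName,
                        Resource:      "limits.cpu",
                    },
                }},
            },
        },
    }
}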
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":199,"skipped":3133,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:45:42.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 25 00:45:42.665: INFO: Waiting up to 5m0s for pod "downward-api-44e7886e-1212-4a41-8bdc-ebd7f2fad78b" in namespace "downward-api-6032" to be "Succeeded or Failed" May 25 00:45:42.682: INFO: Pod "downward-api-44e7886e-1212-4a41-8bdc-ebd7f2fad78b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.449943ms May 25 00:45:44.746: INFO: Pod "downward-api-44e7886e-1212-4a41-8bdc-ebd7f2fad78b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080588955s May 25 00:45:46.750: INFO: Pod "downward-api-44e7886e-1212-4a41-8bdc-ebd7f2fad78b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084617189s STEP: Saw pod success May 25 00:45:46.750: INFO: Pod "downward-api-44e7886e-1212-4a41-8bdc-ebd7f2fad78b" satisfied condition "Succeeded or Failed" May 25 00:45:46.752: INFO: Trying to get logs from node latest-worker pod downward-api-44e7886e-1212-4a41-8bdc-ebd7f2fad78b container dapi-container: STEP: delete the pod May 25 00:45:46.792: INFO: Waiting for pod downward-api-44e7886e-1212-4a41-8bdc-ebd7f2fad78b to disappear May 25 00:45:46.808: INFO: Pod downward-api-44e7886e-1212-4a41-8bdc-ebd7f2fad78b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:45:46.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6032" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":200,"skipped":3142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:45:46.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 25 00:45:56.986: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6110 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:45:56.986: INFO: >>> kubeConfig: /root/.kube/config I0525 00:45:57.027084 7 log.go:172] (0xc002cebef0) (0xc00134e3c0) Create stream I0525 00:45:57.027125 7 log.go:172] (0xc002cebef0) (0xc00134e3c0) Stream added, broadcasting: 1 I0525 00:45:57.029782 7 log.go:172] (0xc002cebef0) Reply frame received for 1 I0525 00:45:57.029814 7 log.go:172] (0xc002cebef0) (0xc000b97c20) Create stream I0525 00:45:57.029825 7 log.go:172] (0xc002cebef0) (0xc000b97c20) Stream added, broadcasting: 3 I0525 00:45:57.030680 7 log.go:172] (0xc002cebef0) Reply frame received for 3 I0525 00:45:57.030710 7 log.go:172] (0xc002cebef0) (0xc000b97d60) Create stream I0525 00:45:57.030721 7 log.go:172] (0xc002cebef0) (0xc000b97d60) Stream added, broadcasting: 5 I0525 00:45:57.031514 7 log.go:172] (0xc002cebef0) Reply frame received for 5 I0525 00:45:57.091664 7 log.go:172] (0xc002cebef0) Data frame received for 3 I0525 00:45:57.091693 7 log.go:172] (0xc000b97c20) (3) Data frame handling I0525 00:45:57.091701 7 log.go:172] (0xc000b97c20) (3) Data frame sent I0525 00:45:57.091709 7 log.go:172] (0xc002cebef0) Data frame received for 3 I0525 00:45:57.091734 7 log.go:172] (0xc002cebef0) Data frame received for 5 I0525 00:45:57.091775 7 log.go:172] (0xc000b97d60) (5) Data frame handling I0525 00:45:57.091814 7 log.go:172] (0xc000b97c20) (3) Data frame handling I0525 00:45:57.093131 7 log.go:172] (0xc002cebef0) Data frame received for 1 I0525 00:45:57.093178 7 log.go:172] (0xc00134e3c0) (1) Data frame handling I0525 00:45:57.093208 7 log.go:172] (0xc00134e3c0) (1) Data frame sent I0525 00:45:57.093249 7 log.go:172] (0xc002cebef0) (0xc00134e3c0) Stream removed, broadcasting: 1 I0525 00:45:57.093271 7 log.go:172] (0xc002cebef0) Go away received I0525 00:45:57.093414 7 log.go:172] (0xc002cebef0) (0xc00134e3c0) Stream removed, broadcasting: 1 I0525 00:45:57.093439 7 log.go:172] (0xc002cebef0) (0xc000b97c20) Stream removed, broadcasting: 3 I0525 00:45:57.093452 7 log.go:172] (0xc002cebef0) (0xc000b97d60) Stream removed, broadcasting: 5 May 25 00:45:57.093: INFO: Exec stderr: "" 
May 25 00:45:57.093: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6110 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:45:57.093: INFO: >>> kubeConfig: /root/.kube/config I0525 00:45:57.126392 7 log.go:172] (0xc001e9e9a0) (0xc001749360) Create stream I0525 00:45:57.126419 7 log.go:172] (0xc001e9e9a0) (0xc001749360) Stream added, broadcasting: 1 I0525 00:45:57.128629 7 log.go:172] (0xc001e9e9a0) Reply frame received for 1 I0525 00:45:57.128666 7 log.go:172] (0xc001e9e9a0) (0xc00158cc80) Create stream I0525 00:45:57.128678 7 log.go:172] (0xc001e9e9a0) (0xc00158cc80) Stream added, broadcasting: 3 I0525 00:45:57.129836 7 log.go:172] (0xc001e9e9a0) Reply frame received for 3 I0525 00:45:57.129866 7 log.go:172] (0xc001e9e9a0) (0xc000b97ea0) Create stream I0525 00:45:57.129875 7 log.go:172] (0xc001e9e9a0) (0xc000b97ea0) Stream added, broadcasting: 5 I0525 00:45:57.130778 7 log.go:172] (0xc001e9e9a0) Reply frame received for 5 I0525 00:45:57.188442 7 log.go:172] (0xc001e9e9a0) Data frame received for 5 I0525 00:45:57.188467 7 log.go:172] (0xc000b97ea0) (5) Data frame handling I0525 00:45:57.188501 7 log.go:172] (0xc001e9e9a0) Data frame received for 3 I0525 00:45:57.188539 7 log.go:172] (0xc00158cc80) (3) Data frame handling I0525 00:45:57.188576 7 log.go:172] (0xc00158cc80) (3) Data frame sent I0525 00:45:57.188780 7 log.go:172] (0xc001e9e9a0) Data frame received for 3 I0525 00:45:57.188832 7 log.go:172] (0xc00158cc80) (3) Data frame handling I0525 00:45:57.190530 7 log.go:172] (0xc001e9e9a0) Data frame received for 1 I0525 00:45:57.190552 7 log.go:172] (0xc001749360) (1) Data frame handling I0525 00:45:57.190559 7 log.go:172] (0xc001749360) (1) Data frame sent I0525 00:45:57.190568 7 log.go:172] (0xc001e9e9a0) (0xc001749360) Stream removed, broadcasting: 1 I0525 00:45:57.190608 7 log.go:172] (0xc001e9e9a0) Go away received I0525 00:45:57.190693 7 log.go:172] (0xc001e9e9a0) (0xc001749360) Stream removed, broadcasting: 1 I0525 00:45:57.190744 7 log.go:172] (0xc001e9e9a0) (0xc00158cc80) Stream removed, broadcasting: 3 I0525 00:45:57.190764 7 log.go:172] (0xc001e9e9a0) (0xc000b97ea0) Stream removed, broadcasting: 5 May 25 00:45:57.190: INFO: Exec stderr: "" May 25 00:45:57.190: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6110 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:45:57.190: INFO: >>> kubeConfig: /root/.kube/config I0525 00:45:57.222899 7 log.go:172] (0xc002df69a0) (0xc00158d0e0) Create stream I0525 00:45:57.222940 7 log.go:172] (0xc002df69a0) (0xc00158d0e0) Stream added, broadcasting: 1 I0525 00:45:57.226330 7 log.go:172] (0xc002df69a0) Reply frame received for 1 I0525 00:45:57.226393 7 log.go:172] (0xc002df69a0) (0xc00134e5a0) Create stream I0525 00:45:57.226412 7 log.go:172] (0xc002df69a0) (0xc00134e5a0) Stream added, broadcasting: 3 I0525 00:45:57.228603 7 log.go:172] (0xc002df69a0) Reply frame received for 3 I0525 00:45:57.228634 7 log.go:172] (0xc002df69a0) (0xc00193c0a0) Create stream I0525 00:45:57.228656 7 log.go:172] (0xc002df69a0) (0xc00193c0a0) Stream added, broadcasting: 5 I0525 00:45:57.229965 7 log.go:172] (0xc002df69a0) Reply frame received for 5 I0525 00:45:57.278194 7 log.go:172] (0xc002df69a0) Data frame received for 5 I0525 00:45:57.278244 7 log.go:172] (0xc00193c0a0) (5) Data frame handling I0525 00:45:57.278282 7 log.go:172] (0xc002df69a0) Data 
frame received for 3 I0525 00:45:57.278295 7 log.go:172] (0xc00134e5a0) (3) Data frame handling I0525 00:45:57.278308 7 log.go:172] (0xc00134e5a0) (3) Data frame sent I0525 00:45:57.278380 7 log.go:172] (0xc002df69a0) Data frame received for 3 I0525 00:45:57.278392 7 log.go:172] (0xc00134e5a0) (3) Data frame handling I0525 00:45:57.279769 7 log.go:172] (0xc002df69a0) Data frame received for 1 I0525 00:45:57.279824 7 log.go:172] (0xc00158d0e0) (1) Data frame handling I0525 00:45:57.279856 7 log.go:172] (0xc00158d0e0) (1) Data frame sent I0525 00:45:57.279878 7 log.go:172] (0xc002df69a0) (0xc00158d0e0) Stream removed, broadcasting: 1 I0525 00:45:57.279897 7 log.go:172] (0xc002df69a0) Go away received I0525 00:45:57.280062 7 log.go:172] (0xc002df69a0) (0xc00158d0e0) Stream removed, broadcasting: 1 I0525 00:45:57.280085 7 log.go:172] (0xc002df69a0) (0xc00134e5a0) Stream removed, broadcasting: 3 I0525 00:45:57.280098 7 log.go:172] (0xc002df69a0) (0xc00193c0a0) Stream removed, broadcasting: 5 May 25 00:45:57.280: INFO: Exec stderr: "" May 25 00:45:57.280: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6110 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:45:57.280: INFO: >>> kubeConfig: /root/.kube/config I0525 00:45:57.341579 7 log.go:172] (0xc005cd0370) (0xc0019683c0) Create stream I0525 00:45:57.341615 7 log.go:172] (0xc005cd0370) (0xc0019683c0) Stream added, broadcasting: 1 I0525 00:45:57.345723 7 log.go:172] (0xc005cd0370) Reply frame received for 1 I0525 00:45:57.345764 7 log.go:172] (0xc005cd0370) (0xc001749680) Create stream I0525 00:45:57.345784 7 log.go:172] (0xc005cd0370) (0xc001749680) Stream added, broadcasting: 3 I0525 00:45:57.346692 7 log.go:172] (0xc005cd0370) Reply frame received for 3 I0525 00:45:57.346746 7 log.go:172] (0xc005cd0370) (0xc001968460) Create stream I0525 00:45:57.346758 7 log.go:172] (0xc005cd0370) (0xc001968460) Stream added, broadcasting: 5 I0525 00:45:57.347720 7 log.go:172] (0xc005cd0370) Reply frame received for 5 I0525 00:45:57.531240 7 log.go:172] (0xc005cd0370) Data frame received for 5 I0525 00:45:57.531292 7 log.go:172] (0xc001968460) (5) Data frame handling I0525 00:45:57.531316 7 log.go:172] (0xc005cd0370) Data frame received for 3 I0525 00:45:57.531335 7 log.go:172] (0xc001749680) (3) Data frame handling I0525 00:45:57.531348 7 log.go:172] (0xc001749680) (3) Data frame sent I0525 00:45:57.531453 7 log.go:172] (0xc005cd0370) Data frame received for 3 I0525 00:45:57.531479 7 log.go:172] (0xc001749680) (3) Data frame handling I0525 00:45:57.532600 7 log.go:172] (0xc005cd0370) Data frame received for 1 I0525 00:45:57.532634 7 log.go:172] (0xc0019683c0) (1) Data frame handling I0525 00:45:57.532655 7 log.go:172] (0xc0019683c0) (1) Data frame sent I0525 00:45:57.532676 7 log.go:172] (0xc005cd0370) (0xc0019683c0) Stream removed, broadcasting: 1 I0525 00:45:57.532704 7 log.go:172] (0xc005cd0370) Go away received I0525 00:45:57.532848 7 log.go:172] (0xc005cd0370) (0xc0019683c0) Stream removed, broadcasting: 1 I0525 00:45:57.532880 7 log.go:172] (0xc005cd0370) (0xc001749680) Stream removed, broadcasting: 3 I0525 00:45:57.532899 7 log.go:172] (0xc005cd0370) (0xc001968460) Stream removed, broadcasting: 5 May 25 00:45:57.532: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 25 00:45:57.532: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-6110 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:45:57.532: INFO: >>> kubeConfig: /root/.kube/config I0525 00:45:57.569100 7 log.go:172] (0xc002df6fd0) (0xc00158d720) Create stream I0525 00:45:57.569250 7 log.go:172] (0xc002df6fd0) (0xc00158d720) Stream added, broadcasting: 1 I0525 00:45:57.571799 7 log.go:172] (0xc002df6fd0) Reply frame received for 1 I0525 00:45:57.571863 7 log.go:172] (0xc002df6fd0) (0xc00134e960) Create stream I0525 00:45:57.571883 7 log.go:172] (0xc002df6fd0) (0xc00134e960) Stream added, broadcasting: 3 I0525 00:45:57.572788 7 log.go:172] (0xc002df6fd0) Reply frame received for 3 I0525 00:45:57.572816 7 log.go:172] (0xc002df6fd0) (0xc00193c3c0) Create stream I0525 00:45:57.572830 7 log.go:172] (0xc002df6fd0) (0xc00193c3c0) Stream added, broadcasting: 5 I0525 00:45:57.573947 7 log.go:172] (0xc002df6fd0) Reply frame received for 5 I0525 00:45:57.648356 7 log.go:172] (0xc002df6fd0) Data frame received for 5 I0525 00:45:57.648382 7 log.go:172] (0xc00193c3c0) (5) Data frame handling I0525 00:45:57.648411 7 log.go:172] (0xc002df6fd0) Data frame received for 3 I0525 00:45:57.648435 7 log.go:172] (0xc00134e960) (3) Data frame handling I0525 00:45:57.648458 7 log.go:172] (0xc00134e960) (3) Data frame sent I0525 00:45:57.648471 7 log.go:172] (0xc002df6fd0) Data frame received for 3 I0525 00:45:57.648487 7 log.go:172] (0xc00134e960) (3) Data frame handling I0525 00:45:57.650119 7 log.go:172] (0xc002df6fd0) Data frame received for 1 I0525 00:45:57.650137 7 log.go:172] (0xc00158d720) (1) Data frame handling I0525 00:45:57.650147 7 log.go:172] (0xc00158d720) (1) Data frame sent I0525 00:45:57.650163 7 log.go:172] (0xc002df6fd0) (0xc00158d720) Stream removed, broadcasting: 1 I0525 00:45:57.650226 7 log.go:172] (0xc002df6fd0) (0xc00158d720) Stream removed, broadcasting: 1 I0525 00:45:57.650240 7 log.go:172] (0xc002df6fd0) (0xc00134e960) Stream removed, broadcasting: 3 I0525 00:45:57.650415 7 log.go:172] (0xc002df6fd0) (0xc00193c3c0) Stream removed, broadcasting: 5 I0525 00:45:57.650450 7 log.go:172] (0xc002df6fd0) Go away received May 25 00:45:57.650: INFO: Exec stderr: "" May 25 00:45:57.650: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6110 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:45:57.650: INFO: >>> kubeConfig: /root/.kube/config I0525 00:45:57.678685 7 log.go:172] (0xc002df7600) (0xc00158d9a0) Create stream I0525 00:45:57.678712 7 log.go:172] (0xc002df7600) (0xc00158d9a0) Stream added, broadcasting: 1 I0525 00:45:57.682380 7 log.go:172] (0xc002df7600) Reply frame received for 1 I0525 00:45:57.682426 7 log.go:172] (0xc002df7600) (0xc0017499a0) Create stream I0525 00:45:57.682442 7 log.go:172] (0xc002df7600) (0xc0017499a0) Stream added, broadcasting: 3 I0525 00:45:57.683583 7 log.go:172] (0xc002df7600) Reply frame received for 3 I0525 00:45:57.683617 7 log.go:172] (0xc002df7600) (0xc001749b80) Create stream I0525 00:45:57.683639 7 log.go:172] (0xc002df7600) (0xc001749b80) Stream added, broadcasting: 5 I0525 00:45:57.684803 7 log.go:172] (0xc002df7600) Reply frame received for 5 I0525 00:45:57.742292 7 log.go:172] (0xc002df7600) Data frame received for 5 I0525 00:45:57.742317 7 log.go:172] (0xc001749b80) (5) Data frame handling I0525 00:45:57.742340 7 log.go:172] (0xc002df7600) Data frame received for 3 I0525 00:45:57.742361 7 log.go:172] 
(0xc0017499a0) (3) Data frame handling I0525 00:45:57.742369 7 log.go:172] (0xc0017499a0) (3) Data frame sent I0525 00:45:57.742375 7 log.go:172] (0xc002df7600) Data frame received for 3 I0525 00:45:57.742381 7 log.go:172] (0xc0017499a0) (3) Data frame handling I0525 00:45:57.743698 7 log.go:172] (0xc002df7600) Data frame received for 1 I0525 00:45:57.743725 7 log.go:172] (0xc00158d9a0) (1) Data frame handling I0525 00:45:57.743748 7 log.go:172] (0xc00158d9a0) (1) Data frame sent I0525 00:45:57.743766 7 log.go:172] (0xc002df7600) (0xc00158d9a0) Stream removed, broadcasting: 1 I0525 00:45:57.743814 7 log.go:172] (0xc002df7600) Go away received I0525 00:45:57.743902 7 log.go:172] (0xc002df7600) (0xc00158d9a0) Stream removed, broadcasting: 1 I0525 00:45:57.743933 7 log.go:172] (0xc002df7600) (0xc0017499a0) Stream removed, broadcasting: 3 I0525 00:45:57.743957 7 log.go:172] (0xc002df7600) (0xc001749b80) Stream removed, broadcasting: 5 May 25 00:45:57.743: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 25 00:45:57.744: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6110 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:45:57.744: INFO: >>> kubeConfig: /root/.kube/config I0525 00:45:57.775910 7 log.go:172] (0xc002df7c30) (0xc00158de00) Create stream I0525 00:45:57.775945 7 log.go:172] (0xc002df7c30) (0xc00158de00) Stream added, broadcasting: 1 I0525 00:45:57.778340 7 log.go:172] (0xc002df7c30) Reply frame received for 1 I0525 00:45:57.778370 7 log.go:172] (0xc002df7c30) (0xc00134eb40) Create stream I0525 00:45:57.778379 7 log.go:172] (0xc002df7c30) (0xc00134eb40) Stream added, broadcasting: 3 I0525 00:45:57.779178 7 log.go:172] (0xc002df7c30) Reply frame received for 3 I0525 00:45:57.779215 7 log.go:172] (0xc002df7c30) (0xc001749c20) Create stream I0525 00:45:57.779229 7 log.go:172] (0xc002df7c30) (0xc001749c20) Stream added, broadcasting: 5 I0525 00:45:57.780083 7 log.go:172] (0xc002df7c30) Reply frame received for 5 I0525 00:45:57.841698 7 log.go:172] (0xc002df7c30) Data frame received for 5 I0525 00:45:57.841721 7 log.go:172] (0xc001749c20) (5) Data frame handling I0525 00:45:57.841808 7 log.go:172] (0xc002df7c30) Data frame received for 3 I0525 00:45:57.841848 7 log.go:172] (0xc00134eb40) (3) Data frame handling I0525 00:45:57.841877 7 log.go:172] (0xc00134eb40) (3) Data frame sent I0525 00:45:57.841901 7 log.go:172] (0xc002df7c30) Data frame received for 3 I0525 00:45:57.841921 7 log.go:172] (0xc00134eb40) (3) Data frame handling I0525 00:45:57.843690 7 log.go:172] (0xc002df7c30) Data frame received for 1 I0525 00:45:57.843704 7 log.go:172] (0xc00158de00) (1) Data frame handling I0525 00:45:57.843723 7 log.go:172] (0xc00158de00) (1) Data frame sent I0525 00:45:57.843734 7 log.go:172] (0xc002df7c30) (0xc00158de00) Stream removed, broadcasting: 1 I0525 00:45:57.843820 7 log.go:172] (0xc002df7c30) (0xc00158de00) Stream removed, broadcasting: 1 I0525 00:45:57.843837 7 log.go:172] (0xc002df7c30) (0xc00134eb40) Stream removed, broadcasting: 3 I0525 00:45:57.843978 7 log.go:172] (0xc002df7c30) Go away received I0525 00:45:57.844038 7 log.go:172] (0xc002df7c30) (0xc001749c20) Stream removed, broadcasting: 5 May 25 00:45:57.844: INFO: Exec stderr: "" May 25 00:45:57.844: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6110 PodName:test-host-network-pod 
ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:45:57.844: INFO: >>> kubeConfig: /root/.kube/config I0525 00:45:57.873946 7 log.go:172] (0xc001e9efd0) (0xc001749e00) Create stream I0525 00:45:57.873987 7 log.go:172] (0xc001e9efd0) (0xc001749e00) Stream added, broadcasting: 1 I0525 00:45:57.876421 7 log.go:172] (0xc001e9efd0) Reply frame received for 1 I0525 00:45:57.876460 7 log.go:172] (0xc001e9efd0) (0xc00134ebe0) Create stream I0525 00:45:57.876477 7 log.go:172] (0xc001e9efd0) (0xc00134ebe0) Stream added, broadcasting: 3 I0525 00:45:57.877676 7 log.go:172] (0xc001e9efd0) Reply frame received for 3 I0525 00:45:57.877711 7 log.go:172] (0xc001e9efd0) (0xc00134ec80) Create stream I0525 00:45:57.877724 7 log.go:172] (0xc001e9efd0) (0xc00134ec80) Stream added, broadcasting: 5 I0525 00:45:57.878765 7 log.go:172] (0xc001e9efd0) Reply frame received for 5 I0525 00:45:57.949331 7 log.go:172] (0xc001e9efd0) Data frame received for 3 I0525 00:45:57.949471 7 log.go:172] (0xc00134ebe0) (3) Data frame handling I0525 00:45:57.949489 7 log.go:172] (0xc00134ebe0) (3) Data frame sent I0525 00:45:57.949498 7 log.go:172] (0xc001e9efd0) Data frame received for 3 I0525 00:45:57.949503 7 log.go:172] (0xc00134ebe0) (3) Data frame handling I0525 00:45:57.949521 7 log.go:172] (0xc001e9efd0) Data frame received for 5 I0525 00:45:57.949535 7 log.go:172] (0xc00134ec80) (5) Data frame handling I0525 00:45:57.950851 7 log.go:172] (0xc001e9efd0) Data frame received for 1 I0525 00:45:57.950870 7 log.go:172] (0xc001749e00) (1) Data frame handling I0525 00:45:57.950881 7 log.go:172] (0xc001749e00) (1) Data frame sent I0525 00:45:57.950905 7 log.go:172] (0xc001e9efd0) (0xc001749e00) Stream removed, broadcasting: 1 I0525 00:45:57.950964 7 log.go:172] (0xc001e9efd0) Go away received I0525 00:45:57.950993 7 log.go:172] (0xc001e9efd0) (0xc001749e00) Stream removed, broadcasting: 1 I0525 00:45:57.951008 7 log.go:172] (0xc001e9efd0) (0xc00134ebe0) Stream removed, broadcasting: 3 I0525 00:45:57.951022 7 log.go:172] (0xc001e9efd0) (0xc00134ec80) Stream removed, broadcasting: 5 May 25 00:45:57.951: INFO: Exec stderr: "" May 25 00:45:57.951: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6110 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:45:57.951: INFO: >>> kubeConfig: /root/.kube/config I0525 00:45:57.982620 7 log.go:172] (0xc001e9f600) (0xc00187c0a0) Create stream I0525 00:45:57.982656 7 log.go:172] (0xc001e9f600) (0xc00187c0a0) Stream added, broadcasting: 1 I0525 00:45:57.984652 7 log.go:172] (0xc001e9f600) Reply frame received for 1 I0525 00:45:57.984728 7 log.go:172] (0xc001e9f600) (0xc00134edc0) Create stream I0525 00:45:57.984742 7 log.go:172] (0xc001e9f600) (0xc00134edc0) Stream added, broadcasting: 3 I0525 00:45:57.985970 7 log.go:172] (0xc001e9f600) Reply frame received for 3 I0525 00:45:57.986034 7 log.go:172] (0xc001e9f600) (0xc000e9a140) Create stream I0525 00:45:57.986075 7 log.go:172] (0xc001e9f600) (0xc000e9a140) Stream added, broadcasting: 5 I0525 00:45:57.987113 7 log.go:172] (0xc001e9f600) Reply frame received for 5 I0525 00:45:58.058912 7 log.go:172] (0xc001e9f600) Data frame received for 5 I0525 00:45:58.058947 7 log.go:172] (0xc000e9a140) (5) Data frame handling I0525 00:45:58.058967 7 log.go:172] (0xc001e9f600) Data frame received for 3 I0525 00:45:58.058987 7 log.go:172] (0xc00134edc0) (3) Data frame handling I0525 00:45:58.059007 7 
log.go:172] (0xc00134edc0) (3) Data frame sent I0525 00:45:58.059015 7 log.go:172] (0xc001e9f600) Data frame received for 3 I0525 00:45:58.059020 7 log.go:172] (0xc00134edc0) (3) Data frame handling I0525 00:45:58.060316 7 log.go:172] (0xc001e9f600) Data frame received for 1 I0525 00:45:58.060332 7 log.go:172] (0xc00187c0a0) (1) Data frame handling I0525 00:45:58.060357 7 log.go:172] (0xc00187c0a0) (1) Data frame sent I0525 00:45:58.060374 7 log.go:172] (0xc001e9f600) (0xc00187c0a0) Stream removed, broadcasting: 1 I0525 00:45:58.060388 7 log.go:172] (0xc001e9f600) Go away received I0525 00:45:58.060482 7 log.go:172] (0xc001e9f600) (0xc00187c0a0) Stream removed, broadcasting: 1 I0525 00:45:58.060499 7 log.go:172] (0xc001e9f600) (0xc00134edc0) Stream removed, broadcasting: 3 I0525 00:45:58.060507 7 log.go:172] (0xc001e9f600) (0xc000e9a140) Stream removed, broadcasting: 5 May 25 00:45:58.060: INFO: Exec stderr: "" May 25 00:45:58.060: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6110 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:45:58.060: INFO: >>> kubeConfig: /root/.kube/config I0525 00:45:58.095629 7 log.go:172] (0xc002ea82c0) (0xc000e9a500) Create stream I0525 00:45:58.095670 7 log.go:172] (0xc002ea82c0) (0xc000e9a500) Stream added, broadcasting: 1 I0525 00:45:58.097959 7 log.go:172] (0xc002ea82c0) Reply frame received for 1 I0525 00:45:58.098011 7 log.go:172] (0xc002ea82c0) (0xc000e9a5a0) Create stream I0525 00:45:58.098030 7 log.go:172] (0xc002ea82c0) (0xc000e9a5a0) Stream added, broadcasting: 3 I0525 00:45:58.098880 7 log.go:172] (0xc002ea82c0) Reply frame received for 3 I0525 00:45:58.098914 7 log.go:172] (0xc002ea82c0) (0xc00187c140) Create stream I0525 00:45:58.098929 7 log.go:172] (0xc002ea82c0) (0xc00187c140) Stream added, broadcasting: 5 I0525 00:45:58.099764 7 log.go:172] (0xc002ea82c0) Reply frame received for 5 I0525 00:45:58.149759 7 log.go:172] (0xc002ea82c0) Data frame received for 5 I0525 00:45:58.149810 7 log.go:172] (0xc00187c140) (5) Data frame handling I0525 00:45:58.149838 7 log.go:172] (0xc002ea82c0) Data frame received for 3 I0525 00:45:58.149854 7 log.go:172] (0xc000e9a5a0) (3) Data frame handling I0525 00:45:58.149879 7 log.go:172] (0xc000e9a5a0) (3) Data frame sent I0525 00:45:58.149896 7 log.go:172] (0xc002ea82c0) Data frame received for 3 I0525 00:45:58.149908 7 log.go:172] (0xc000e9a5a0) (3) Data frame handling I0525 00:45:58.151592 7 log.go:172] (0xc002ea82c0) Data frame received for 1 I0525 00:45:58.151620 7 log.go:172] (0xc000e9a500) (1) Data frame handling I0525 00:45:58.151640 7 log.go:172] (0xc000e9a500) (1) Data frame sent I0525 00:45:58.151657 7 log.go:172] (0xc002ea82c0) (0xc000e9a500) Stream removed, broadcasting: 1 I0525 00:45:58.151675 7 log.go:172] (0xc002ea82c0) Go away received I0525 00:45:58.151911 7 log.go:172] (0xc002ea82c0) (0xc000e9a500) Stream removed, broadcasting: 1 I0525 00:45:58.151942 7 log.go:172] (0xc002ea82c0) (0xc000e9a5a0) Stream removed, broadcasting: 3 I0525 00:45:58.151957 7 log.go:172] (0xc002ea82c0) (0xc00187c140) Stream removed, broadcasting: 5 May 25 00:45:58.151: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:45:58.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6110" for this suite. 
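The two negative cases verified above both come down to the pod spec: the kubelet only injects its managed /etc/hosts when a container uses the pod network and does not mount its own file at that path. A rough Go sketch of the two pod shapes involved, with illustrative names, images, and commands (the suite generates its own manifests):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// etcHostsPods returns the two pod shapes whose /etc/hosts the kubelet
// leaves alone: one whose container mounts its own file at /etc/hosts,
// and one that runs in the host network namespace.
func etcHostsPods() (withOwnMount, hostNetwork *corev1.Pod) {
	withOwnMount = &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "hosts",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "busybox-3",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				// An explicit mount at /etc/hosts opts this container out of
				// kubelet management; the managed file is only injected when
				// nothing else claims the path.
				VolumeMounts: []corev1.VolumeMount{{Name: "hosts", MountPath: "/etc/hosts"}},
			}},
		},
	}
	hostNetwork = &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
		Spec: corev1.PodSpec{
			// hostNetwork pods see the node's own /etc/hosts, so the kubelet
			// does not rewrite it either.
			HostNetwork: true,
			Containers: []corev1.Container{{
				Name:    "busybox-1",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	return withOwnMount, hostNetwork
}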
• [SLOW TEST:11.342 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":201,"skipped":3172,"failed":0} S ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:45:58.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 25 00:46:02.804: INFO: Successfully updated pod "annotationupdate225da729-1710-4f6d-813e-759978cbd91c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:46:04.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5684" for this suite. 
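The update path exercised above relies on a downwardAPI volume projecting metadata.annotations into a file; when the test patches the pod's annotations, the kubelet rewrites the projected file and the container observes the new contents. A minimal sketch of such a volume, with an illustrative volume name and mount path:

package sketch

import corev1 "k8s.io/api/core/v1"

// annotationsVolume projects the pod's own annotations into
// /etc/podinfo/annotations; the kubelet refreshes this file after the
// annotations change, which is the update the spec above waits for.
func annotationsVolume() (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "annotations",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
				}},
			},
		},
	}
	mount := corev1.VolumeMount{Name: "podinfo", MountPath: "/etc/podinfo"}
	return vol, mount
}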
• [SLOW TEST:6.713 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":202,"skipped":3173,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:46:04.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-hzgd STEP: Creating a pod to test atomic-volume-subpath May 25 00:46:05.018: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hzgd" in namespace "subpath-6431" to be "Succeeded or Failed" May 25 00:46:05.024: INFO: Pod "pod-subpath-test-configmap-hzgd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134259ms May 25 00:46:07.028: INFO: Pod "pod-subpath-test-configmap-hzgd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010504067s May 25 00:46:09.032: INFO: Pod "pod-subpath-test-configmap-hzgd": Phase="Running", Reason="", readiness=true. Elapsed: 4.014470262s May 25 00:46:11.037: INFO: Pod "pod-subpath-test-configmap-hzgd": Phase="Running", Reason="", readiness=true. Elapsed: 6.019149271s May 25 00:46:13.042: INFO: Pod "pod-subpath-test-configmap-hzgd": Phase="Running", Reason="", readiness=true. Elapsed: 8.023764861s May 25 00:46:15.046: INFO: Pod "pod-subpath-test-configmap-hzgd": Phase="Running", Reason="", readiness=true. Elapsed: 10.027695295s May 25 00:46:17.050: INFO: Pod "pod-subpath-test-configmap-hzgd": Phase="Running", Reason="", readiness=true. Elapsed: 12.031814845s May 25 00:46:19.054: INFO: Pod "pod-subpath-test-configmap-hzgd": Phase="Running", Reason="", readiness=true. Elapsed: 14.036284682s May 25 00:46:21.058: INFO: Pod "pod-subpath-test-configmap-hzgd": Phase="Running", Reason="", readiness=true. Elapsed: 16.040047517s May 25 00:46:23.062: INFO: Pod "pod-subpath-test-configmap-hzgd": Phase="Running", Reason="", readiness=true. Elapsed: 18.043758006s May 25 00:46:25.066: INFO: Pod "pod-subpath-test-configmap-hzgd": Phase="Running", Reason="", readiness=true. Elapsed: 20.047970685s May 25 00:46:27.070: INFO: Pod "pod-subpath-test-configmap-hzgd": Phase="Running", Reason="", readiness=true. Elapsed: 22.051630886s May 25 00:46:29.074: INFO: Pod "pod-subpath-test-configmap-hzgd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.055783885s STEP: Saw pod success May 25 00:46:29.074: INFO: Pod "pod-subpath-test-configmap-hzgd" satisfied condition "Succeeded or Failed" May 25 00:46:29.077: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-hzgd container test-container-subpath-configmap-hzgd: STEP: delete the pod May 25 00:46:29.143: INFO: Waiting for pod pod-subpath-test-configmap-hzgd to disappear May 25 00:46:29.265: INFO: Pod pod-subpath-test-configmap-hzgd no longer exists STEP: Deleting pod pod-subpath-test-configmap-hzgd May 25 00:46:29.265: INFO: Deleting pod "pod-subpath-test-configmap-hzgd" in namespace "subpath-6431" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:46:29.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6431" for this suite. • [SLOW TEST:24.401 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":203,"skipped":3175,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:46:29.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:46:29.445: INFO: The status of Pod test-webserver-e49f9c9a-1e2c-4726-9463-327fed71a588 is Pending, waiting for it to be Running (with Ready = true) May 25 00:46:31.450: INFO: The status of Pod test-webserver-e49f9c9a-1e2c-4726-9463-327fed71a588 is Pending, waiting for it to be Running (with Ready = true) May 25 00:46:33.450: INFO: The status of Pod test-webserver-e49f9c9a-1e2c-4726-9463-327fed71a588 is Running (Ready = false) May 25 00:46:35.631: INFO: The status of Pod test-webserver-e49f9c9a-1e2c-4726-9463-327fed71a588 is Running (Ready = false) May 25 00:46:37.449: INFO: The status of Pod test-webserver-e49f9c9a-1e2c-4726-9463-327fed71a588 is Running (Ready = false) May 25 00:46:39.450: INFO: The status of Pod test-webserver-e49f9c9a-1e2c-4726-9463-327fed71a588 is Running (Ready = false) May 25 00:46:41.449: INFO: The status of Pod test-webserver-e49f9c9a-1e2c-4726-9463-327fed71a588 is Running (Ready = false) May 25 00:46:43.449: INFO: The status of Pod 
test-webserver-e49f9c9a-1e2c-4726-9463-327fed71a588 is Running (Ready = false) May 25 00:46:45.450: INFO: The status of Pod test-webserver-e49f9c9a-1e2c-4726-9463-327fed71a588 is Running (Ready = false) May 25 00:46:47.450: INFO: The status of Pod test-webserver-e49f9c9a-1e2c-4726-9463-327fed71a588 is Running (Ready = false) May 25 00:46:49.450: INFO: The status of Pod test-webserver-e49f9c9a-1e2c-4726-9463-327fed71a588 is Running (Ready = true) May 25 00:46:49.453: INFO: Container started at 2020-05-25 00:46:32 +0000 UTC, pod became ready at 2020-05-25 00:46:48 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:46:49.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7107" for this suite. • [SLOW TEST:20.187 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":204,"skipped":3182,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:46:49.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4790.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4790.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4790.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4790.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4790.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4790.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 00:46:55.722: INFO: DNS probes using dns-4790/dns-test-cca78a86-477f-4ffe-83e0-20c0c3982dda succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:46:56.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4790" for this suite. • [SLOW TEST:6.706 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":205,"skipped":3195,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:46:56.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 25 00:46:56.769: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5174 /api/v1/namespaces/watch-5174/configmaps/e2e-watch-test-resource-version b874512c-8ebb-4420-ae98-28414fe8cb57 7428139 0 2020-05-25 00:46:56 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-25 00:46:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 00:46:56.769: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5174 /api/v1/namespaces/watch-5174/configmaps/e2e-watch-test-resource-version b874512c-8ebb-4420-ae98-28414fe8cb57 7428141 0 2020-05-25 00:46:56 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-25 00:46:56 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:46:56.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5174" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":206,"skipped":3214,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:46:56.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-3f1e7b89-3564-44e1-b405-12da90880d7f STEP: Creating a pod to test consume configMaps May 25 00:46:56.923: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fe787096-e27b-40fe-a764-93860141868f" in namespace "projected-8290" to be "Succeeded or Failed" May 25 00:46:56.989: INFO: Pod "pod-projected-configmaps-fe787096-e27b-40fe-a764-93860141868f": Phase="Pending", Reason="", readiness=false. Elapsed: 66.154165ms May 25 00:46:59.025: INFO: Pod "pod-projected-configmaps-fe787096-e27b-40fe-a764-93860141868f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102665489s May 25 00:47:01.030: INFO: Pod "pod-projected-configmaps-fe787096-e27b-40fe-a764-93860141868f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107212185s STEP: Saw pod success May 25 00:47:01.030: INFO: Pod "pod-projected-configmaps-fe787096-e27b-40fe-a764-93860141868f" satisfied condition "Succeeded or Failed" May 25 00:47:01.033: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-fe787096-e27b-40fe-a764-93860141868f container projected-configmap-volume-test: STEP: delete the pod May 25 00:47:01.064: INFO: Waiting for pod pod-projected-configmaps-fe787096-e27b-40fe-a764-93860141868f to disappear May 25 00:47:01.289: INFO: Pod pod-projected-configmaps-fe787096-e27b-40fe-a764-93860141868f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:47:01.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8290" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":207,"skipped":3247,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:47:01.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 25 00:47:05.478: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-6369 PodName:var-expansion-aa9a0783-13f9-4d67-884d-c5106db0ed40 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:47:05.478: INFO: >>> kubeConfig: /root/.kube/config I0525 00:47:05.512652 7 log.go:172] (0xc002ceadc0) (0xc002d13180) Create stream I0525 00:47:05.512681 7 log.go:172] (0xc002ceadc0) (0xc002d13180) Stream added, broadcasting: 1 I0525 00:47:05.514888 7 log.go:172] (0xc002ceadc0) Reply frame received for 1 I0525 00:47:05.514935 7 log.go:172] (0xc002ceadc0) (0xc002bea000) Create stream I0525 00:47:05.514950 7 log.go:172] (0xc002ceadc0) (0xc002bea000) Stream added, broadcasting: 3 I0525 00:47:05.516128 7 log.go:172] (0xc002ceadc0) Reply frame received for 3 I0525 00:47:05.516173 7 log.go:172] (0xc002ceadc0) (0xc000d45b80) Create stream I0525 00:47:05.516198 7 log.go:172] (0xc002ceadc0) (0xc000d45b80) Stream added, broadcasting: 5 I0525 00:47:05.517468 7 log.go:172] (0xc002ceadc0) Reply frame received for 5 I0525 00:47:05.592345 7 log.go:172] (0xc002ceadc0) Data frame received for 5 I0525 00:47:05.592380 7 log.go:172] (0xc000d45b80) (5) Data frame handling I0525 00:47:05.592400 7 log.go:172] (0xc002ceadc0) Data frame received for 3 I0525 00:47:05.592411 7 log.go:172] (0xc002bea000) (3) Data frame handling I0525 00:47:05.593790 7 log.go:172] (0xc002ceadc0) Data frame received for 1 I0525 00:47:05.593814 7 log.go:172] (0xc002d13180) (1) Data frame handling I0525 00:47:05.593824 7 log.go:172] (0xc002d13180) (1) Data frame sent I0525 00:47:05.593837 7 log.go:172] (0xc002ceadc0) (0xc002d13180) Stream removed, broadcasting: 1 I0525 00:47:05.593930 7 log.go:172] (0xc002ceadc0) Go away received I0525 00:47:05.593992 7 log.go:172] (0xc002ceadc0) (0xc002d13180) Stream removed, broadcasting: 1 I0525 00:47:05.594028 7 log.go:172] (0xc002ceadc0) (0xc002bea000) Stream removed, broadcasting: 3 I0525 00:47:05.594041 7 log.go:172] (0xc002ceadc0) (0xc000d45b80) Stream removed, broadcasting: 5 STEP: test for file in mounted path May 25 00:47:05.631: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-6369 PodName:var-expansion-aa9a0783-13f9-4d67-884d-c5106db0ed40 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 
00:47:05.631: INFO: >>> kubeConfig: /root/.kube/config I0525 00:47:05.664708 7 log.go:172] (0xc002df64d0) (0xc0028ea3c0) Create stream I0525 00:47:05.664738 7 log.go:172] (0xc002df64d0) (0xc0028ea3c0) Stream added, broadcasting: 1 I0525 00:47:05.666821 7 log.go:172] (0xc002df64d0) Reply frame received for 1 I0525 00:47:05.666883 7 log.go:172] (0xc002df64d0) (0xc002d13360) Create stream I0525 00:47:05.666923 7 log.go:172] (0xc002df64d0) (0xc002d13360) Stream added, broadcasting: 3 I0525 00:47:05.668048 7 log.go:172] (0xc002df64d0) Reply frame received for 3 I0525 00:47:05.668093 7 log.go:172] (0xc002df64d0) (0xc000d45c20) Create stream I0525 00:47:05.668107 7 log.go:172] (0xc002df64d0) (0xc000d45c20) Stream added, broadcasting: 5 I0525 00:47:05.668947 7 log.go:172] (0xc002df64d0) Reply frame received for 5 I0525 00:47:05.731411 7 log.go:172] (0xc002df64d0) Data frame received for 3 I0525 00:47:05.731442 7 log.go:172] (0xc002d13360) (3) Data frame handling I0525 00:47:05.731466 7 log.go:172] (0xc002df64d0) Data frame received for 5 I0525 00:47:05.731485 7 log.go:172] (0xc000d45c20) (5) Data frame handling I0525 00:47:05.732556 7 log.go:172] (0xc002df64d0) Data frame received for 1 I0525 00:47:05.732583 7 log.go:172] (0xc0028ea3c0) (1) Data frame handling I0525 00:47:05.732597 7 log.go:172] (0xc0028ea3c0) (1) Data frame sent I0525 00:47:05.732608 7 log.go:172] (0xc002df64d0) (0xc0028ea3c0) Stream removed, broadcasting: 1 I0525 00:47:05.732620 7 log.go:172] (0xc002df64d0) Go away received I0525 00:47:05.732714 7 log.go:172] (0xc002df64d0) (0xc0028ea3c0) Stream removed, broadcasting: 1 I0525 00:47:05.732733 7 log.go:172] (0xc002df64d0) (0xc002d13360) Stream removed, broadcasting: 3 I0525 00:47:05.732741 7 log.go:172] (0xc002df64d0) (0xc000d45c20) Stream removed, broadcasting: 5 STEP: updating the annotation value May 25 00:47:06.242: INFO: Successfully updated pod "var-expansion-aa9a0783-13f9-4d67-884d-c5106db0ed40" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 25 00:47:06.272: INFO: Deleting pod "var-expansion-aa9a0783-13f9-4d67-884d-c5106db0ed40" in namespace "var-expansion-6369" May 25 00:47:06.278: INFO: Wait up to 5m0s for pod "var-expansion-aa9a0783-13f9-4d67-884d-c5106db0ed40" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:47:46.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6369" for this suite. 
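The two exec probes above line up when the second mount resolves a subPathExpr to mypath/foo: the file touched at /volume_mount/mypath/foo/test.log then appears at /subpath_mount/test.log. A sketch of mounts with that shape, assuming an annotation-backed environment variable; the annotation key and paths are illustrative:

package sketch

import corev1 "k8s.io/api/core/v1"

// subPathExprMounts mounts the same volume twice: once at its root and
// once at a subpath expanded from an env var that the downward API fills
// from a pod annotation. Updating the annotation is the step the spec
// above performs before deleting the pod.
func subPathExprMounts() (corev1.EnvVar, []corev1.VolumeMount) {
	env := corev1.EnvVar{
		Name: "ANNOTATION",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{
				FieldPath: "metadata.annotations['mysubpath']",
			},
		},
	}
	mounts := []corev1.VolumeMount{
		{Name: "workdir1", MountPath: "/volume_mount"},
		{
			Name:        "workdir1",
			MountPath:   "/subpath_mount",
			SubPathExpr: "$(ANNOTATION)/foo", // resolves to mypath/foo when the annotation is "mypath"
		},
	}
	return env, mounts
}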
• [SLOW TEST:44.987 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":208,"skipped":3261,"failed":0} SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:47:46.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:48:01.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7786" for this suite. STEP: Destroying namespace "nsdeletetest-7173" for this suite. May 25 00:48:01.630: INFO: Namespace nsdeletetest-7173 was already deleted STEP: Destroying namespace "nsdeletetest-1037" for this suite. 
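The assertion above is that namespace finalization garbage-collects every pod before the namespace object itself disappears. A hedged client-go sketch of the same check; the polling interval and timeout are arbitrary choices, not the suite's:

package sketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteNamespaceAndWait deletes a namespace and polls until it is gone;
// because finalization removes every pod first, a NotFound on the
// namespace implies its pods are gone too.
func deleteNamespaceAndWait(ctx context.Context, c kubernetes.Interface, ns string) error {
	if err := c.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // namespace (and everything in it) removed
		}
		if err == nil {
			return false, nil // still Terminating; keep polling
		}
		return false, err // unexpected error aborts the poll
	})
}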
• [SLOW TEST:15.304 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":209,"skipped":3263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:48:01.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-a5881a7b-f1bb-4a96-ad58-2cd40a07d6cc STEP: Creating a pod to test consume secrets May 25 00:48:01.843: INFO: Waiting up to 5m0s for pod "pod-secrets-17d7f46a-939f-4638-a676-10950ddcf6f4" in namespace "secrets-4673" to be "Succeeded or Failed" May 25 00:48:01.864: INFO: Pod "pod-secrets-17d7f46a-939f-4638-a676-10950ddcf6f4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.374648ms May 25 00:48:03.868: INFO: Pod "pod-secrets-17d7f46a-939f-4638-a676-10950ddcf6f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025146225s May 25 00:48:05.874: INFO: Pod "pod-secrets-17d7f46a-939f-4638-a676-10950ddcf6f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030432785s STEP: Saw pod success May 25 00:48:05.874: INFO: Pod "pod-secrets-17d7f46a-939f-4638-a676-10950ddcf6f4" satisfied condition "Succeeded or Failed" May 25 00:48:05.877: INFO: Trying to get logs from node latest-worker pod pod-secrets-17d7f46a-939f-4638-a676-10950ddcf6f4 container secret-env-test: STEP: delete the pod May 25 00:48:05.910: INFO: Waiting for pod pod-secrets-17d7f46a-939f-4638-a676-10950ddcf6f4 to disappear May 25 00:48:06.001: INFO: Pod pod-secrets-17d7f46a-939f-4638-a676-10950ddcf6f4 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:48:06.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4673" for this suite. 
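Consumption "in env vars" means the container environment is populated from a Secret key via secretKeyRef, rather than through a volume. A minimal sketch; the variable name and key are illustrative:

package sketch

import corev1 "k8s.io/api/core/v1"

// secretEnv wires one key of a Secret into a container environment
// variable, which is the whole consumption path the spec above tests.
func secretEnv(secretName string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: "SECRET_DATA",
		ValueFrom: &corev1.EnvVarSource{
			SecretKeyRef: &corev1.SecretKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				Key:                  "data-1",
			},
		},
	}
}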
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":210,"skipped":3294,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:48:06.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 25 00:48:06.073: INFO: Waiting up to 5m0s for pod "pod-0cb50c07-9385-4b9a-93d9-07ef64bcac36" in namespace "emptydir-9846" to be "Succeeded or Failed" May 25 00:48:06.076: INFO: Pod "pod-0cb50c07-9385-4b9a-93d9-07ef64bcac36": Phase="Pending", Reason="", readiness=false. Elapsed: 3.625788ms May 25 00:48:08.080: INFO: Pod "pod-0cb50c07-9385-4b9a-93d9-07ef64bcac36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007385028s May 25 00:48:10.084: INFO: Pod "pod-0cb50c07-9385-4b9a-93d9-07ef64bcac36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011178953s STEP: Saw pod success May 25 00:48:10.084: INFO: Pod "pod-0cb50c07-9385-4b9a-93d9-07ef64bcac36" satisfied condition "Succeeded or Failed" May 25 00:48:10.087: INFO: Trying to get logs from node latest-worker pod pod-0cb50c07-9385-4b9a-93d9-07ef64bcac36 container test-container: STEP: delete the pod May 25 00:48:10.125: INFO: Waiting for pod pod-0cb50c07-9385-4b9a-93d9-07ef64bcac36 to disappear May 25 00:48:10.157: INFO: Pod pod-0cb50c07-9385-4b9a-93d9-07ef64bcac36 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:48:10.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9846" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":211,"skipped":3305,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:48:10.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:48:10.327: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 25 00:48:10.342: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:10.346: INFO: Number of nodes with available pods: 0 May 25 00:48:10.346: INFO: Node latest-worker is running more than one daemon pod May 25 00:48:11.351: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:11.354: INFO: Number of nodes with available pods: 0 May 25 00:48:11.354: INFO: Node latest-worker is running more than one daemon pod May 25 00:48:12.352: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:12.355: INFO: Number of nodes with available pods: 0 May 25 00:48:12.355: INFO: Node latest-worker is running more than one daemon pod May 25 00:48:13.446: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:13.450: INFO: Number of nodes with available pods: 0 May 25 00:48:13.450: INFO: Node latest-worker is running more than one daemon pod May 25 00:48:14.351: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:14.355: INFO: Number of nodes with available pods: 2 May 25 00:48:14.355: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 25 00:48:14.433: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:14.434: INFO: Wrong image for pod: daemon-set-lmc94. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 25 00:48:14.488: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:15.492: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:15.492: INFO: Wrong image for pod: daemon-set-lmc94. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:15.496: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:16.494: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:16.494: INFO: Wrong image for pod: daemon-set-lmc94. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:16.498: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:17.492: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:17.492: INFO: Wrong image for pod: daemon-set-lmc94. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:17.496: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:18.494: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:18.494: INFO: Wrong image for pod: daemon-set-lmc94. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:18.494: INFO: Pod daemon-set-lmc94 is not available May 25 00:48:18.499: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:19.493: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:19.493: INFO: Wrong image for pod: daemon-set-lmc94. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:19.493: INFO: Pod daemon-set-lmc94 is not available May 25 00:48:19.498: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:20.494: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:20.494: INFO: Wrong image for pod: daemon-set-lmc94. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 25 00:48:20.494: INFO: Pod daemon-set-lmc94 is not available May 25 00:48:20.498: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:21.497: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:21.497: INFO: Wrong image for pod: daemon-set-lmc94. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:21.497: INFO: Pod daemon-set-lmc94 is not available May 25 00:48:21.501: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:22.494: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:22.494: INFO: Wrong image for pod: daemon-set-lmc94. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:22.494: INFO: Pod daemon-set-lmc94 is not available May 25 00:48:22.499: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:23.492: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:23.492: INFO: Wrong image for pod: daemon-set-lmc94. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:23.492: INFO: Pod daemon-set-lmc94 is not available May 25 00:48:23.496: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:24.494: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:24.494: INFO: Wrong image for pod: daemon-set-lmc94. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:24.494: INFO: Pod daemon-set-lmc94 is not available May 25 00:48:24.498: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:25.493: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:25.494: INFO: Pod daemon-set-qvm5d is not available May 25 00:48:25.497: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:26.494: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 25 00:48:26.494: INFO: Pod daemon-set-qvm5d is not available May 25 00:48:26.499: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:27.493: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:27.493: INFO: Pod daemon-set-qvm5d is not available May 25 00:48:27.497: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:28.493: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:28.497: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:29.493: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:29.497: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:30.493: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:30.493: INFO: Pod daemon-set-g57k9 is not available May 25 00:48:30.498: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:31.494: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:31.494: INFO: Pod daemon-set-g57k9 is not available May 25 00:48:31.499: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:32.494: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:32.494: INFO: Pod daemon-set-g57k9 is not available May 25 00:48:32.499: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:33.493: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 00:48:33.493: INFO: Pod daemon-set-g57k9 is not available May 25 00:48:33.497: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:34.494: INFO: Wrong image for pod: daemon-set-g57k9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 25 00:48:34.494: INFO: Pod daemon-set-g57k9 is not available May 25 00:48:34.498: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:35.494: INFO: Pod daemon-set-fjzw5 is not available May 25 00:48:35.499: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 25 00:48:35.504: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:35.507: INFO: Number of nodes with available pods: 1 May 25 00:48:35.507: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:48:36.514: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:36.518: INFO: Number of nodes with available pods: 1 May 25 00:48:36.518: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:48:37.512: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:37.514: INFO: Number of nodes with available pods: 1 May 25 00:48:37.514: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:48:38.514: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:38.517: INFO: Number of nodes with available pods: 1 May 25 00:48:38.517: INFO: Node latest-worker2 is running more than one daemon pod May 25 00:48:39.532: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 00:48:39.535: INFO: Number of nodes with available pods: 2 May 25 00:48:39.535: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9533, will wait for the garbage collector to delete the pods May 25 00:48:39.631: INFO: Deleting DaemonSet.extensions daemon-set took: 7.0327ms May 25 00:48:40.031: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.267701ms May 25 00:48:45.343: INFO: Number of nodes with available pods: 0 May 25 00:48:45.343: INFO: Number of running nodes: 0, number of available pods: 0 May 25 00:48:45.347: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9533/daemonsets","resourceVersion":"7428715"},"items":null} May 25 00:48:45.349: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9533/pods","resourceVersion":"7428715"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:48:45.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
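The progression logged above, with exactly one pod "not available" at any moment while images are swapped node by node, is the signature of the RollingUpdate strategy at its default maxUnavailable of 1: the controller deletes one old pod, waits for its replacement to become available, then moves to the next node. A sketch of that strategy stanza with the default spelled out explicitly:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rollingUpdateStrategy caps the rollout at one unavailable daemon pod at
// a time, matching the one-node-at-a-time replacement logged above.
func rollingUpdateStrategy() appsv1.DaemonSetUpdateStrategy {
	maxUnavailable := intstr.FromInt(1)
	return appsv1.DaemonSetUpdateStrategy{
		Type: appsv1.RollingUpdateDaemonSetType,
		RollingUpdate: &appsv1.RollingUpdateDaemonSet{
			MaxUnavailable: &maxUnavailable,
		},
	}
}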
STEP: Destroying namespace "daemonsets-9533" for this suite. • [SLOW TEST:35.181 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":212,"skipped":3333,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:48:45.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-166a16b1-d79c-43fe-9ae4-3545e62fa8b0 May 25 00:48:45.438: INFO: Pod name my-hostname-basic-166a16b1-d79c-43fe-9ae4-3545e62fa8b0: Found 0 pods out of 1 May 25 00:48:50.457: INFO: Pod name my-hostname-basic-166a16b1-d79c-43fe-9ae4-3545e62fa8b0: Found 1 pods out of 1 May 25 00:48:50.457: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-166a16b1-d79c-43fe-9ae4-3545e62fa8b0" are running May 25 00:48:50.465: INFO: Pod "my-hostname-basic-166a16b1-d79c-43fe-9ae4-3545e62fa8b0-ghbjc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 00:48:45 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 00:48:48 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 00:48:48 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 00:48:45 +0000 UTC Reason: Message:}]) May 25 00:48:50.465: INFO: Trying to dial the pod May 25 00:48:55.478: INFO: Controller my-hostname-basic-166a16b1-d79c-43fe-9ae4-3545e62fa8b0: Got expected result from replica 1 [my-hostname-basic-166a16b1-d79c-43fe-9ae4-3545e62fa8b0-ghbjc]: "my-hostname-basic-166a16b1-d79c-43fe-9ae4-3545e62fa8b0-ghbjc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:48:55.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1769" for this suite. 
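------------------------------
The ReplicationController test that just passed reduces to a single object plus a poll. A minimal client-go sketch of the controller it creates (the name here is a fixed placeholder where the suite uses a random UUID; the agnhost image and namespace are the ones shown in the log; error handling is collapsed to panics):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    replicas := int32(1)
    name := "my-hostname-basic-example" // placeholder for the generated name
    rc := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: name},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &replicas,
            Selector: map[string]string{"name": name},
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": name}},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  name,
                        Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
                        // serve-hostname answers every request with the pod's own
                        // name, which is the string the dial step compares against.
                        Args: []string{"serve-hostname"},
                    }},
                },
            },
        },
    }
    out, err := cs.CoreV1().ReplicationControllers("replication-controller-1769").
        Create(context.TODO(), rc, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("created ReplicationController", out.Name)
}
------------------------------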
• [SLOW TEST:10.118 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":213,"skipped":3353,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:48:55.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 25 00:48:55.578: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 00:48:55.598: INFO: Waiting for terminating namespaces to be deleted... May 25 00:48:55.601: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 25 00:48:55.607: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 25 00:48:55.607: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 25 00:48:55.607: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 25 00:48:55.607: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 25 00:48:55.607: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 25 00:48:55.607: INFO: Container kindnet-cni ready: true, restart count 0 May 25 00:48:55.607: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 25 00:48:55.607: INFO: Container kube-proxy ready: true, restart count 0 May 25 00:48:55.607: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 25 00:48:55.612: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 25 00:48:55.612: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 25 00:48:55.612: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 25 00:48:55.612: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 25 00:48:55.612: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 25 00:48:55.612: INFO: Container kindnet-cni ready: true, restart count 0 May 25 00:48:55.612: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 25 00:48:55.612: INFO: Container kube-proxy ready: true, restart count 0 May 25 00:48:55.612: INFO: my-hostname-basic-166a16b1-d79c-43fe-9ae4-3545e62fa8b0-ghbjc from replication-controller-1769 started at 2020-05-25 00:48:45 +0000 UTC (1 container status recorded) May 25 00:48:55.612: INFO: Container my-hostname-basic-166a16b1-d79c-43fe-9ae4-3545e62fa8b0 ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e464ae82-341f-4783-afb8-cf0ff4543906 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (equivalent to an unset hostIP) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-e464ae82-341f-4783-afb8-cf0ff4543906 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-e464ae82-341f-4783-afb8-cf0ff4543906 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:54:03.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8680" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.340 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":214,"skipped":3375,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:54:03.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 00:54:04.367: INFO: deployment
"sample-webhook-deployment" doesn't have the required revision set May 25 00:54:06.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725964844, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725964844, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725964844, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725964844, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 00:54:08.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725964844, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725964844, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725964844, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725964844, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 00:54:11.459: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 25 00:54:15.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-7323 to-be-attached-pod -i -c=container1' May 25 00:54:18.600: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:54:18.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7323" for this suite. STEP: Destroying namespace "webhook-7323-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.901 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":215,"skipped":3388,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:54:18.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 25 00:54:18.803: INFO: Created pod &Pod{ObjectMeta:{dns-4078 dns-4078 /api/v1/namespaces/dns-4078/pods/dns-4078 3a60549a-1066-4b1e-8e13-a9f33d8942b9 7429851 0 2020-05-25 00:54:18 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-25 00:54:18 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sjq99,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sjq99,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sjq99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 00:54:18.811: INFO: The status of Pod dns-4078 is Pending, waiting for it to be Running (with Ready = true) May 25 00:54:20.815: INFO: The status of Pod dns-4078 is Pending, waiting for it to be Running (with Ready = true) May 25 
00:54:22.815: INFO: The status of Pod dns-4078 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 25 00:54:22.815: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4078 PodName:dns-4078 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 00:54:22.815: INFO: >>> kubeConfig: /root/.kube/config I0525 00:54:22.875914 7 log.go:172] (0xc002df6580) (0xc00134e5a0) Create stream I0525 00:54:22.875946 7 log.go:172] (0xc002df6580) (0xc00134e5a0) Stream added, broadcasting: 1 I0525 00:54:22.877486 7 log.go:172] (0xc002df6580) Reply frame received for 1 I0525 00:54:22.877516 7 log.go:172] (0xc002df6580) (0xc001968000) Create stream I0525 00:54:22.877526 7 log.go:172] (0xc002df6580) (0xc001968000) Stream added, broadcasting: 3 I0525 00:54:22.878252 7 log.go:172] (0xc002df6580) Reply frame received for 3 I0525 00:54:22.878277 7 log.go:172] (0xc002df6580) (0xc00134e960) Create stream I0525 00:54:22.878289 7 log.go:172] (0xc002df6580) (0xc00134e960) Stream added, broadcasting: 5 I0525 00:54:22.878977 7 log.go:172] (0xc002df6580) Reply frame received for 5 I0525 00:54:22.974076 7 log.go:172] (0xc002df6580) Data frame received for 3 I0525 00:54:22.974104 7 log.go:172] (0xc001968000) (3) Data frame handling I0525 00:54:22.974117 7 log.go:172] (0xc001968000) (3) Data frame sent I0525 00:54:22.976636 7 log.go:172] (0xc002df6580) Data frame received for 5 I0525 00:54:22.976667 7 log.go:172] (0xc00134e960) (5) Data frame handling I0525 00:54:22.976747 7 log.go:172] (0xc002df6580) Data frame received for 3 I0525 00:54:22.976782 7 log.go:172] (0xc001968000) (3) Data frame handling I0525 00:54:22.978704 7 log.go:172] (0xc002df6580) Data frame received for 1 I0525 00:54:22.978730 7 log.go:172] (0xc00134e5a0) (1) Data frame handling I0525 00:54:22.978758 7 log.go:172] (0xc00134e5a0) (1) Data frame sent I0525 00:54:22.978791 7 log.go:172] (0xc002df6580) (0xc00134e5a0) Stream removed, broadcasting: 1 I0525 00:54:22.978891 7 log.go:172] (0xc002df6580) (0xc00134e5a0) Stream removed, broadcasting: 1 I0525 00:54:22.978929 7 log.go:172] (0xc002df6580) (0xc001968000) Stream removed, broadcasting: 3 I0525 00:54:22.978953 7 log.go:172] (0xc002df6580) (0xc00134e960) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 25 00:54:22.978: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4078 PodName:dns-4078 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0525 00:54:22.979011 7 log.go:172] (0xc002df6580) Go away received May 25 00:54:22.979: INFO: >>> kubeConfig: /root/.kube/config I0525 00:54:23.009620 7 log.go:172] (0xc002c0e4d0) (0xc00187c1e0) Create stream I0525 00:54:23.009654 7 log.go:172] (0xc002c0e4d0) (0xc00187c1e0) Stream added, broadcasting: 1 I0525 00:54:23.011246 7 log.go:172] (0xc002c0e4d0) Reply frame received for 1 I0525 00:54:23.011284 7 log.go:172] (0xc002c0e4d0) (0xc00121adc0) Create stream I0525 00:54:23.011293 7 log.go:172] (0xc002c0e4d0) (0xc00121adc0) Stream added, broadcasting: 3 I0525 00:54:23.012061 7 log.go:172] (0xc002c0e4d0) Reply frame received for 3 I0525 00:54:23.012092 7 log.go:172] (0xc002c0e4d0) (0xc00187c280) Create stream I0525 00:54:23.012102 7 log.go:172] (0xc002c0e4d0) (0xc00187c280) Stream added, broadcasting: 5 I0525 00:54:23.012813 7 log.go:172] (0xc002c0e4d0) Reply frame received for 5 I0525 00:54:23.075850 7 log.go:172] (0xc002c0e4d0) Data frame received for 3 I0525 00:54:23.075882 7 log.go:172] (0xc00121adc0) (3) Data frame handling I0525 00:54:23.075913 7 log.go:172] (0xc00121adc0) (3) Data frame sent I0525 00:54:23.077361 7 log.go:172] (0xc002c0e4d0) Data frame received for 3 I0525 00:54:23.077397 7 log.go:172] (0xc00121adc0) (3) Data frame handling I0525 00:54:23.077435 7 log.go:172] (0xc002c0e4d0) Data frame received for 5 I0525 00:54:23.077454 7 log.go:172] (0xc00187c280) (5) Data frame handling I0525 00:54:23.079000 7 log.go:172] (0xc002c0e4d0) Data frame received for 1 I0525 00:54:23.079035 7 log.go:172] (0xc00187c1e0) (1) Data frame handling I0525 00:54:23.079068 7 log.go:172] (0xc00187c1e0) (1) Data frame sent I0525 00:54:23.079089 7 log.go:172] (0xc002c0e4d0) (0xc00187c1e0) Stream removed, broadcasting: 1 I0525 00:54:23.079112 7 log.go:172] (0xc002c0e4d0) Go away received I0525 00:54:23.079223 7 log.go:172] (0xc002c0e4d0) (0xc00187c1e0) Stream removed, broadcasting: 1 I0525 00:54:23.079252 7 log.go:172] (0xc002c0e4d0) (0xc00121adc0) Stream removed, broadcasting: 3 I0525 00:54:23.079268 7 log.go:172] (0xc002c0e4d0) (0xc00187c280) Stream removed, broadcasting: 5 May 25 00:54:23.079: INFO: Deleting pod dns-4078... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:54:23.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4078" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":216,"skipped":3431,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:54:23.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:54:34.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3222" for this suite. • [SLOW TEST:11.652 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":288,"completed":217,"skipped":3435,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:54:34.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:54:39.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6282" for this suite. 
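------------------------------
The ordering guarantee the Watchers test relies on can be checked directly: two watches opened from the same resourceVersion must see the same events in the same order. A minimal client-go sketch (the namespace is illustrative, and sampling five events is an arbitrary cutoff):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ns := "watch-6282" // illustrative; any namespace with churning ConfigMaps works

    // Remember a starting point, then open two watches from that same
    // resourceVersion. The API server must replay identical history to both.
    list, err := cs.CoreV1().ConfigMaps(ns).List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    var orders [2][]string
    for i := range orders {
        w, err := cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(),
            metav1.ListOptions{ResourceVersion: list.ResourceVersion})
        if err != nil {
            panic(err)
        }
        for ev := range w.ResultChan() {
            cm, ok := ev.Object.(*corev1.ConfigMap)
            if !ok {
                continue // e.g. a *metav1.Status error event
            }
            orders[i] = append(orders[i], cm.ResourceVersion)
            if len(orders[i]) == 5 { // sample a handful of events
                break
            }
        }
        w.Stop()
    }
    fmt.Println("watch 0:", orders[0])
    fmt.Println("watch 1:", orders[1]) // the two slices should match exactly
}
------------------------------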
• [SLOW TEST:5.009 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":218,"skipped":3466,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:54:39.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating a pod May 25 00:54:39.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-3072 -- logs-generator --log-lines-total 100 --run-duration 20s' May 25 00:54:40.079: INFO: stderr: "" May 25 00:54:40.079: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 25 00:54:40.079: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 25 00:54:40.079: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3072" to be "running and ready, or succeeded" May 25 00:54:40.102: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 22.14463ms May 25 00:54:42.105: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026029654s May 25 00:54:44.110: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.030241381s May 25 00:54:44.110: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 25 00:54:44.110: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings May 25 00:54:44.110: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3072' May 25 00:54:44.234: INFO: stderr: "" May 25 00:54:44.234: INFO: stdout: "I0525 00:54:42.623621 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/k427 393\nI0525 00:54:42.823919 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/z2w 383\nI0525 00:54:43.023805 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/t77z 562\nI0525 00:54:43.223777 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/t4p8 260\nI0525 00:54:43.423804 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/lcm6 571\nI0525 00:54:43.623800 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/qgw8 527\nI0525 00:54:43.823833 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/66hk 280\nI0525 00:54:44.023830 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/j8b 564\nI0525 00:54:44.223750 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/b5x4 303\n" STEP: limiting log lines May 25 00:54:44.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3072 --tail=1' May 25 00:54:44.346: INFO: stderr: "" May 25 00:54:44.346: INFO: stdout: "I0525 00:54:44.223750 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/b5x4 303\n" May 25 00:54:44.346: INFO: got output "I0525 00:54:44.223750 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/b5x4 303\n" STEP: limiting log bytes May 25 00:54:44.346: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3072 --limit-bytes=1' May 25 00:54:44.462: INFO: stderr: "" May 25 00:54:44.462: INFO: stdout: "I" May 25 00:54:44.462: INFO: got output "I" STEP: exposing timestamps May 25 00:54:44.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3072 --tail=1 --timestamps' May 25 00:54:44.577: INFO: stderr: "" May 25 00:54:44.577: INFO: stdout: "2020-05-25T00:54:44.424065502Z I0525 00:54:44.423863 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/x2q4 306\n" May 25 00:54:44.577: INFO: got output "2020-05-25T00:54:44.424065502Z I0525 00:54:44.423863 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/x2q4 306\n" STEP: restricting to a time range May 25 00:54:47.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3072 --since=1s' May 25 00:54:47.190: INFO: stderr: "" May 25 00:54:47.190: INFO: stdout: "I0525 00:54:46.223779 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/n7f 415\nI0525 00:54:46.423850 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/8v9 200\nI0525 00:54:46.623826 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/7cm 296\nI0525 00:54:46.823796 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/cxcn 398\nI0525 00:54:47.023752 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/98x 378\n" May 25 00:54:47.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs
logs-generator logs-generator --namespace=kubectl-3072 --since=24h' May 25 00:54:47.316: INFO: stderr: "" May 25 00:54:47.316: INFO: stdout: "I0525 00:54:42.623621 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/k427 393\nI0525 00:54:42.823919 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/z2w 383\nI0525 00:54:43.023805 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/t77z 562\nI0525 00:54:43.223777 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/t4p8 260\nI0525 00:54:43.423804 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/lcm6 571\nI0525 00:54:43.623800 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/qgw8 527\nI0525 00:54:43.823833 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/66hk 280\nI0525 00:54:44.023830 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/j8b 564\nI0525 00:54:44.223750 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/b5x4 303\nI0525 00:54:44.423863 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/x2q4 306\nI0525 00:54:44.623871 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/tc8 412\nI0525 00:54:44.823817 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/6fg 334\nI0525 00:54:45.023777 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/qvc 329\nI0525 00:54:45.223759 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/qkj 508\nI0525 00:54:45.423797 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/v8v 241\nI0525 00:54:45.623838 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/vqb 495\nI0525 00:54:45.823835 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/knpc 278\nI0525 00:54:46.023779 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/hvfb 244\nI0525 00:54:46.223779 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/n7f 415\nI0525 00:54:46.423850 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/8v9 200\nI0525 00:54:46.623826 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/7cm 296\nI0525 00:54:46.823796 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/cxcn 398\nI0525 00:54:47.023752 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/98x 378\nI0525 00:54:47.223769 1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/cqhf 319\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 25 00:54:47.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3072' May 25 00:54:54.905: INFO: stderr: "" May 25 00:54:54.905: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:54:54.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3072" for this suite. 
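------------------------------
Each kubectl flag exercised above maps one-to-one onto a field of the pod log API. A minimal client-go sketch of the same four queries (pod and namespace names taken from the run above; error handling collapsed to panics):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    tail, limit, since := int64(1), int64(1), int64(1)
    // Each PodLogOptions value mirrors one of the kubectl invocations above.
    for _, opts := range []corev1.PodLogOptions{
        {TailLines: &tail},                   // --tail=1
        {LimitBytes: &limit},                 // --limit-bytes=1
        {TailLines: &tail, Timestamps: true}, // --tail=1 --timestamps
        {SinceSeconds: &since},               // --since=1s
    } {
        raw, err := cs.CoreV1().Pods("kubectl-3072").
            GetLogs("logs-generator", &opts).Do(context.TODO()).Raw()
        if err != nil {
            panic(err)
        }
        fmt.Printf("%q\n", raw)
    }
}
------------------------------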
• [SLOW TEST:15.082 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":219,"skipped":3471,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:54:54.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-phnw STEP: Creating a pod to test atomic-volume-subpath May 25 00:54:55.012: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-phnw" in namespace "subpath-3569" to be "Succeeded or Failed" May 25 00:54:55.016: INFO: Pod "pod-subpath-test-downwardapi-phnw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017181ms May 25 00:54:57.022: INFO: Pod "pod-subpath-test-downwardapi-phnw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010093773s May 25 00:54:59.027: INFO: Pod "pod-subpath-test-downwardapi-phnw": Phase="Running", Reason="", readiness=true. Elapsed: 4.014909994s May 25 00:55:01.032: INFO: Pod "pod-subpath-test-downwardapi-phnw": Phase="Running", Reason="", readiness=true. Elapsed: 6.019375658s May 25 00:55:03.036: INFO: Pod "pod-subpath-test-downwardapi-phnw": Phase="Running", Reason="", readiness=true. Elapsed: 8.023723823s May 25 00:55:05.040: INFO: Pod "pod-subpath-test-downwardapi-phnw": Phase="Running", Reason="", readiness=true. Elapsed: 10.027825105s May 25 00:55:07.045: INFO: Pod "pod-subpath-test-downwardapi-phnw": Phase="Running", Reason="", readiness=true. Elapsed: 12.032833836s May 25 00:55:09.049: INFO: Pod "pod-subpath-test-downwardapi-phnw": Phase="Running", Reason="", readiness=true. Elapsed: 14.037047226s May 25 00:55:11.054: INFO: Pod "pod-subpath-test-downwardapi-phnw": Phase="Running", Reason="", readiness=true. Elapsed: 16.041822209s May 25 00:55:13.058: INFO: Pod "pod-subpath-test-downwardapi-phnw": Phase="Running", Reason="", readiness=true. Elapsed: 18.045178035s May 25 00:55:15.062: INFO: Pod "pod-subpath-test-downwardapi-phnw": Phase="Running", Reason="", readiness=true. Elapsed: 20.049164656s May 25 00:55:17.065: INFO: Pod "pod-subpath-test-downwardapi-phnw": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.052876588s May 25 00:55:19.069: INFO: Pod "pod-subpath-test-downwardapi-phnw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.056714282s STEP: Saw pod success May 25 00:55:19.069: INFO: Pod "pod-subpath-test-downwardapi-phnw" satisfied condition "Succeeded or Failed" May 25 00:55:19.072: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-phnw container test-container-subpath-downwardapi-phnw: STEP: delete the pod May 25 00:55:19.283: INFO: Waiting for pod pod-subpath-test-downwardapi-phnw to disappear May 25 00:55:19.322: INFO: Pod pod-subpath-test-downwardapi-phnw no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-phnw May 25 00:55:19.322: INFO: Deleting pod "pod-subpath-test-downwardapi-phnw" in namespace "subpath-3569" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:55:19.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3569" for this suite. • [SLOW TEST:24.416 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":220,"skipped":3547,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:55:19.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 25 00:55:19.475: INFO: Waiting up to 5m0s for pod "client-containers-adc41a1a-7401-4806-b5b4-1f50f4b03d2a" in namespace "containers-2879" to be "Succeeded or Failed" May 25 00:55:19.478: INFO: Pod "client-containers-adc41a1a-7401-4806-b5b4-1f50f4b03d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.958035ms May 25 00:55:21.496: INFO: Pod "client-containers-adc41a1a-7401-4806-b5b4-1f50f4b03d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020476158s May 25 00:55:23.500: INFO: Pod "client-containers-adc41a1a-7401-4806-b5b4-1f50f4b03d2a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024896049s STEP: Saw pod success May 25 00:55:23.500: INFO: Pod "client-containers-adc41a1a-7401-4806-b5b4-1f50f4b03d2a" satisfied condition "Succeeded or Failed" May 25 00:55:23.503: INFO: Trying to get logs from node latest-worker pod client-containers-adc41a1a-7401-4806-b5b4-1f50f4b03d2a container test-container: STEP: delete the pod May 25 00:55:23.576: INFO: Waiting for pod client-containers-adc41a1a-7401-4806-b5b4-1f50f4b03d2a to disappear May 25 00:55:23.587: INFO: Pod client-containers-adc41a1a-7401-4806-b5b4-1f50f4b03d2a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:55:23.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2879" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":221,"skipped":3553,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:55:23.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 00:55:23.655: INFO: Waiting up to 5m0s for pod "downwardapi-volume-076e1ce6-8e00-4414-bde4-b0270d6903a4" in namespace "projected-7059" to be "Succeeded or Failed" May 25 00:55:23.659: INFO: Pod "downwardapi-volume-076e1ce6-8e00-4414-bde4-b0270d6903a4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.715502ms May 25 00:55:25.663: INFO: Pod "downwardapi-volume-076e1ce6-8e00-4414-bde4-b0270d6903a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008152971s May 25 00:55:27.667: INFO: Pod "downwardapi-volume-076e1ce6-8e00-4414-bde4-b0270d6903a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012657414s STEP: Saw pod success May 25 00:55:27.668: INFO: Pod "downwardapi-volume-076e1ce6-8e00-4414-bde4-b0270d6903a4" satisfied condition "Succeeded or Failed" May 25 00:55:27.671: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-076e1ce6-8e00-4414-bde4-b0270d6903a4 container client-container: STEP: delete the pod May 25 00:55:27.738: INFO: Waiting for pod downwardapi-volume-076e1ce6-8e00-4414-bde4-b0270d6903a4 to disappear May 25 00:55:27.743: INFO: Pod downwardapi-volume-076e1ce6-8e00-4414-bde4-b0270d6903a4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:55:27.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7059" for this suite. 
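------------------------------
The "mode on item file" check that just passed sets a file mode on a single projected downward API item. A minimal sketch of the volume shape involved (construct-and-print only, so it runs without a cluster; the path and the 0400 mode are illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // A per-file Mode on a projected downward API item. The kubelet applies
    // this mode to just this file, independent of the volume's DefaultMode.
    mode := int32(0400)
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "podname",
                            FieldRef: &corev1.ObjectFieldSelector{
                                APIVersion: "v1",
                                FieldPath:  "metadata.name",
                            },
                            Mode: &mode, // just this file becomes r--------
                        }},
                    },
                }},
            },
        },
    }
    fmt.Printf("%+v\n", vol)
}
------------------------------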
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":222,"skipped":3559,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:55:27.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 00:55:27.896: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b970b8d2-4d06-4b1c-a785-fe98781abd4b" in namespace "projected-9165" to be "Succeeded or Failed" May 25 00:55:27.913: INFO: Pod "downwardapi-volume-b970b8d2-4d06-4b1c-a785-fe98781abd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.973442ms May 25 00:55:29.951: INFO: Pod "downwardapi-volume-b970b8d2-4d06-4b1c-a785-fe98781abd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055178555s May 25 00:55:31.956: INFO: Pod "downwardapi-volume-b970b8d2-4d06-4b1c-a785-fe98781abd4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059876098s STEP: Saw pod success May 25 00:55:31.956: INFO: Pod "downwardapi-volume-b970b8d2-4d06-4b1c-a785-fe98781abd4b" satisfied condition "Succeeded or Failed" May 25 00:55:31.960: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b970b8d2-4d06-4b1c-a785-fe98781abd4b container client-container: STEP: delete the pod May 25 00:55:31.979: INFO: Waiting for pod downwardapi-volume-b970b8d2-4d06-4b1c-a785-fe98781abd4b to disappear May 25 00:55:31.982: INFO: Pod downwardapi-volume-b970b8d2-4d06-4b1c-a785-fe98781abd4b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:55:31.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9165" for this suite. 
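------------------------------
DefaultMode is the volume-wide counterpart of the per-item Mode in the previous sketch: any projected file without its own Mode gets this value. Another construct-and-print sketch (values illustrative; note the API dumps modes in decimal, which is why the secret volume earlier in this log shows DefaultMode:*420 for octal 0644):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    defaultMode := int32(0400) // prints as *256 in API object dumps
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                DefaultMode: &defaultMode, // applies to every file lacking its own Mode
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "podname",
                            FieldRef: &corev1.ObjectFieldSelector{
                                APIVersion: "v1",
                                FieldPath:  "metadata.name",
                            },
                        }},
                    },
                }},
            },
        },
    }
    fmt.Printf("%+v\n", vol)
}
------------------------------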
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":223,"skipped":3571,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:55:31.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-f897c307-76bf-4d37-8562-587f6f898f3c STEP: Creating a pod to test consume configMaps May 25 00:55:32.185: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5c179f7b-b7a9-40a6-a555-766d7b4510c0" in namespace "projected-2880" to be "Succeeded or Failed" May 25 00:55:32.192: INFO: Pod "pod-projected-configmaps-5c179f7b-b7a9-40a6-a555-766d7b4510c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192301ms May 25 00:55:34.196: INFO: Pod "pod-projected-configmaps-5c179f7b-b7a9-40a6-a555-766d7b4510c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010614153s May 25 00:55:36.201: INFO: Pod "pod-projected-configmaps-5c179f7b-b7a9-40a6-a555-766d7b4510c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015406681s STEP: Saw pod success May 25 00:55:36.201: INFO: Pod "pod-projected-configmaps-5c179f7b-b7a9-40a6-a555-766d7b4510c0" satisfied condition "Succeeded or Failed" May 25 00:55:36.204: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-5c179f7b-b7a9-40a6-a555-766d7b4510c0 container projected-configmap-volume-test: STEP: delete the pod May 25 00:55:36.299: INFO: Waiting for pod pod-projected-configmaps-5c179f7b-b7a9-40a6-a555-766d7b4510c0 to disappear May 25 00:55:36.388: INFO: Pod pod-projected-configmaps-5c179f7b-b7a9-40a6-a555-766d7b4510c0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:55:36.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2880" for this suite. 
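------------------------------
The projected ConfigMap test just above uses the same projected volume shape with a ConfigMap source instead. A construct-and-print sketch (the ConfigMap name is a placeholder for the generated one in the log):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // With no Items list, every key in the referenced ConfigMap becomes a
    // file under the volume's mount path.
    vol := corev1.Volume{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "projected-configmap-test-volume", // placeholder
                        },
                    },
                }},
            },
        },
    }
    fmt.Printf("%+v\n", vol)
}
------------------------------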
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":224,"skipped":3600,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:55:36.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-lw4z STEP: Creating a pod to test atomic-volume-subpath May 25 00:55:36.469: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lw4z" in namespace "subpath-1987" to be "Succeeded or Failed" May 25 00:55:36.515: INFO: Pod "pod-subpath-test-projected-lw4z": Phase="Pending", Reason="", readiness=false. Elapsed: 45.082734ms May 25 00:55:38.519: INFO: Pod "pod-subpath-test-projected-lw4z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049735216s May 25 00:55:40.522: INFO: Pod "pod-subpath-test-projected-lw4z": Phase="Running", Reason="", readiness=true. Elapsed: 4.052958403s May 25 00:55:42.527: INFO: Pod "pod-subpath-test-projected-lw4z": Phase="Running", Reason="", readiness=true. Elapsed: 6.0572758s May 25 00:55:44.531: INFO: Pod "pod-subpath-test-projected-lw4z": Phase="Running", Reason="", readiness=true. Elapsed: 8.061997114s May 25 00:55:46.536: INFO: Pod "pod-subpath-test-projected-lw4z": Phase="Running", Reason="", readiness=true. Elapsed: 10.066429314s May 25 00:55:48.541: INFO: Pod "pod-subpath-test-projected-lw4z": Phase="Running", Reason="", readiness=true. Elapsed: 12.071901567s May 25 00:55:50.546: INFO: Pod "pod-subpath-test-projected-lw4z": Phase="Running", Reason="", readiness=true. Elapsed: 14.076818686s May 25 00:55:52.551: INFO: Pod "pod-subpath-test-projected-lw4z": Phase="Running", Reason="", readiness=true. Elapsed: 16.081646436s May 25 00:55:54.558: INFO: Pod "pod-subpath-test-projected-lw4z": Phase="Running", Reason="", readiness=true. Elapsed: 18.088917781s May 25 00:55:56.563: INFO: Pod "pod-subpath-test-projected-lw4z": Phase="Running", Reason="", readiness=true. Elapsed: 20.093454923s May 25 00:55:58.567: INFO: Pod "pod-subpath-test-projected-lw4z": Phase="Running", Reason="", readiness=true. Elapsed: 22.097491727s May 25 00:56:00.572: INFO: Pod "pod-subpath-test-projected-lw4z": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.102227399s STEP: Saw pod success May 25 00:56:00.572: INFO: Pod "pod-subpath-test-projected-lw4z" satisfied condition "Succeeded or Failed" May 25 00:56:00.575: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-lw4z container test-container-subpath-projected-lw4z: STEP: delete the pod May 25 00:56:00.619: INFO: Waiting for pod pod-subpath-test-projected-lw4z to disappear May 25 00:56:00.630: INFO: Pod pod-subpath-test-projected-lw4z no longer exists STEP: Deleting pod pod-subpath-test-projected-lw4z May 25 00:56:00.630: INFO: Deleting pod "pod-subpath-test-projected-lw4z" in namespace "subpath-1987" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:56:00.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1987" for this suite. • [SLOW TEST:24.265 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":225,"skipped":3603,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:56:00.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6515.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6515.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 00:56:06.766: INFO: DNS probes using dns-6515/dns-test-e87d1109-2060-41fb-b4d4-bd1f51b446c1 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:56:06.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6515" for this suite. • [SLOW TEST:6.173 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":226,"skipped":3635,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:56:06.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:56:07.184: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:56:11.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5277" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":227,"skipped":3653,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:56:11.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 00:56:11.391: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f99edc71-fbf7-4450-b826-1fc101d7e1fc" in namespace "downward-api-4994" to be "Succeeded or Failed" May 25 00:56:11.409: INFO: Pod "downwardapi-volume-f99edc71-fbf7-4450-b826-1fc101d7e1fc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.337653ms May 25 00:56:13.413: INFO: Pod "downwardapi-volume-f99edc71-fbf7-4450-b826-1fc101d7e1fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022232092s May 25 00:56:15.417: INFO: Pod "downwardapi-volume-f99edc71-fbf7-4450-b826-1fc101d7e1fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025935692s STEP: Saw pod success May 25 00:56:15.417: INFO: Pod "downwardapi-volume-f99edc71-fbf7-4450-b826-1fc101d7e1fc" satisfied condition "Succeeded or Failed" May 25 00:56:15.420: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f99edc71-fbf7-4450-b826-1fc101d7e1fc container client-container: STEP: delete the pod May 25 00:56:15.467: INFO: Waiting for pod downwardapi-volume-f99edc71-fbf7-4450-b826-1fc101d7e1fc to disappear May 25 00:56:15.481: INFO: Pod downwardapi-volume-f99edc71-fbf7-4450-b826-1fc101d7e1fc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:56:15.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4994" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":228,"skipped":3663,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:56:15.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:56:15.572: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:56:16.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3533" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":229,"skipped":3677,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:56:16.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 00:56:16.945: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad50c6e6-38b8-4c1f-acfb-ea6572f10184" in namespace "downward-api-3977" to be "Succeeded or Failed" May 25 00:56:16.948: INFO: Pod "downwardapi-volume-ad50c6e6-38b8-4c1f-acfb-ea6572f10184": Phase="Pending", Reason="", readiness=false. Elapsed: 3.105608ms May 25 00:56:18.957: INFO: Pod "downwardapi-volume-ad50c6e6-38b8-4c1f-acfb-ea6572f10184": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012306244s May 25 00:56:20.962: INFO: Pod "downwardapi-volume-ad50c6e6-38b8-4c1f-acfb-ea6572f10184": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.016433189s May 25 00:56:22.966: INFO: Pod "downwardapi-volume-ad50c6e6-38b8-4c1f-acfb-ea6572f10184": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021263356s STEP: Saw pod success May 25 00:56:22.966: INFO: Pod "downwardapi-volume-ad50c6e6-38b8-4c1f-acfb-ea6572f10184" satisfied condition "Succeeded or Failed" May 25 00:56:22.970: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ad50c6e6-38b8-4c1f-acfb-ea6572f10184 container client-container: STEP: delete the pod May 25 00:56:23.007: INFO: Waiting for pod downwardapi-volume-ad50c6e6-38b8-4c1f-acfb-ea6572f10184 to disappear May 25 00:56:23.020: INFO: Pod downwardapi-volume-ad50c6e6-38b8-4c1f-acfb-ea6572f10184 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:56:23.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3977" for this suite. • [SLOW TEST:6.175 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":230,"skipped":3701,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:56:23.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 25 00:56:23.112: INFO: Waiting up to 5m0s for pod "downward-api-89a14315-41ea-489d-acea-efe2f592bd84" in namespace "downward-api-1709" to be "Succeeded or Failed" May 25 00:56:23.122: INFO: Pod "downward-api-89a14315-41ea-489d-acea-efe2f592bd84": Phase="Pending", Reason="", readiness=false. Elapsed: 10.193249ms May 25 00:56:25.131: INFO: Pod "downward-api-89a14315-41ea-489d-acea-efe2f592bd84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01875818s May 25 00:56:27.135: INFO: Pod "downward-api-89a14315-41ea-489d-acea-efe2f592bd84": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022494179s STEP: Saw pod success May 25 00:56:27.135: INFO: Pod "downward-api-89a14315-41ea-489d-acea-efe2f592bd84" satisfied condition "Succeeded or Failed" May 25 00:56:27.138: INFO: Trying to get logs from node latest-worker2 pod downward-api-89a14315-41ea-489d-acea-efe2f592bd84 container dapi-container: STEP: delete the pod May 25 00:56:27.211: INFO: Waiting for pod downward-api-89a14315-41ea-489d-acea-efe2f592bd84 to disappear May 25 00:56:27.218: INFO: Pod downward-api-89a14315-41ea-489d-acea-efe2f592bd84 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:56:27.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1709" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":231,"skipped":3726,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:56:27.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 25 00:56:35.392: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 25 00:56:35.455: INFO: Pod pod-with-prestop-http-hook still exists May 25 00:56:37.455: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 25 00:56:37.459: INFO: Pod pod-with-prestop-http-hook still exists May 25 00:56:39.455: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 25 00:56:39.461: INFO: Pod pod-with-prestop-http-hook still exists May 25 00:56:41.455: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 25 00:56:41.461: INFO: Pod pod-with-prestop-http-hook still exists May 25 00:56:43.455: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 25 00:56:43.460: INFO: Pod pod-with-prestop-http-hook still exists May 25 00:56:45.455: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 25 00:56:45.460: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:56:45.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-641" for this suite. 
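
Note: the lifecycle-hook spec above registers a preStop httpGet hook and then deletes the pod, polling until it disappears; the hook fires during termination, before the container receives SIGTERM. A minimal sketch of a pod with such a hook, assuming an HTTP receiver is listening at the target (names, image, path, and address are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: nginx:1.21
    lifecycle:
      preStop:
        httpGet:
          # Called by the kubelet as part of pod termination.
          path: /prestop
          port: 8080
          host: 10.0.0.10   # illustrative hook-receiver address
EOF
# Deleting the pod triggers the preStop hook during shutdown.
kubectl delete pod pod-with-prestop-http-hook
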
• [SLOW TEST:18.248 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":232,"skipped":3733,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:56:45.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-96c06a33-0986-4b40-ad31-58a4d6239265 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:56:45.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2939" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":233,"skipped":3739,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:56:45.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:56:56.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4232" for this suite. 
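
Note: the ResourceQuota spec above creates a quota, watches its status pick up a new ReplicaSet, and watches usage drop again on deletion. A minimal object-count quota for ReplicaSets, assuming a cluster (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-replicasets
spec:
  hard:
    # Object-count quota: at most two ReplicaSets in this namespace.
    count/replicasets.apps: "2"
EOF
# status.used should track ReplicaSet creation and deletion.
kubectl describe resourcequota quota-replicasets
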
• [SLOW TEST:11.219 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":288,"completed":234,"skipped":3772,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:56:56.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 25 00:56:56.881: INFO: >>> kubeConfig: /root/.kube/config May 25 00:56:59.826: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:57:09.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2781" for this suite. 
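
Note: the CRD-publishing spec above registers two kinds in the same group/version and checks that both appear in the published OpenAPI document. Assuming a CRD with a structural schema is installed, the published schema can be inspected with kubectl explain; the group, kind, and field names below are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names: {plural: foos, singular: foo, kind: Foo}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas: {type: integer}
EOF
# The schema is served through the aggregated OpenAPI endpoint.
kubectl explain foos.spec
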
• [SLOW TEST:12.620 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":235,"skipped":3799,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:57:09.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:57:09.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-5532" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":236,"skipped":3808,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:57:09.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 00:57:10.519: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 00:57:12.622: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965030, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965030, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965030, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965030, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 00:57:14.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965030, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965030, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965030, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965030, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 00:57:17.664: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:57:17.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1821" for this suite. STEP: Destroying namespace "webhook-1821-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.349 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":237,"skipped":3826,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:57:18.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 25 00:57:18.078: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:57:33.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3348" for this suite.
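
Note: the spec above flips one CRD version to served: false and verifies that its definition disappears from the published OpenAPI spec while the other version is untouched. Assuming a multi-version CRD like the earlier sketch with a second entry in spec.versions, the flip itself is a one-line JSON patch (the index 1 is illustrative):

# Stop serving the second entry in spec.versions; the storage version stays unchanged.
kubectl patch crd foos.demo.example.com --type=json \
  -p='[{"op": "replace", "path": "/spec/versions/1/served", "value": false}]'
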
• [SLOW TEST:15.507 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":238,"skipped":3861,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:57:33.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-48ece6e1-a12f-4523-a144-f8b05fde87ac STEP: Creating a pod to test consume configMaps May 25 00:57:33.687: INFO: Waiting up to 5m0s for pod "pod-configmaps-2eee7215-2fba-42b1-a464-12d7dc9eb5bf" in namespace "configmap-690" to be "Succeeded or Failed" May 25 00:57:33.700: INFO: Pod "pod-configmaps-2eee7215-2fba-42b1-a464-12d7dc9eb5bf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.650054ms May 25 00:57:35.802: INFO: Pod "pod-configmaps-2eee7215-2fba-42b1-a464-12d7dc9eb5bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115569109s May 25 00:57:37.807: INFO: Pod "pod-configmaps-2eee7215-2fba-42b1-a464-12d7dc9eb5bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120484976s STEP: Saw pod success May 25 00:57:37.807: INFO: Pod "pod-configmaps-2eee7215-2fba-42b1-a464-12d7dc9eb5bf" satisfied condition "Succeeded or Failed" May 25 00:57:37.810: INFO: Trying to get logs from node latest-worker pod pod-configmaps-2eee7215-2fba-42b1-a464-12d7dc9eb5bf container configmap-volume-test: STEP: delete the pod May 25 00:57:37.867: INFO: Waiting for pod pod-configmaps-2eee7215-2fba-42b1-a464-12d7dc9eb5bf to disappear May 25 00:57:37.904: INFO: Pod pod-configmaps-2eee7215-2fba-42b1-a464-12d7dc9eb5bf no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:57:37.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-690" for this suite. 
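
Note: the ConfigMap spec above mounts a single key under a mapped path with an explicit per-item file mode. A minimal sketch, assuming a cluster (names and the 0400 mode are illustrative):

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-item-mode
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.28
    command: ["sh", "-c", "ls -lR /etc/config && cat /etc/config/path/to/data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: demo-config
      items:
      - key: data-1
        path: path/to/data
        mode: 0400   # per-item file mode; Linux only
EOF
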
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":239,"skipped":3868,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:57:37.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 00:57:38.056: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8cfd32b-a4b8-4e6b-be99-794aea2a5be5" in namespace "downward-api-7330" to be "Succeeded or Failed" May 25 00:57:38.074: INFO: Pod "downwardapi-volume-a8cfd32b-a4b8-4e6b-be99-794aea2a5be5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.667534ms May 25 00:57:40.096: INFO: Pod "downwardapi-volume-a8cfd32b-a4b8-4e6b-be99-794aea2a5be5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040730837s May 25 00:57:42.126: INFO: Pod "downwardapi-volume-a8cfd32b-a4b8-4e6b-be99-794aea2a5be5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070573119s STEP: Saw pod success May 25 00:57:42.126: INFO: Pod "downwardapi-volume-a8cfd32b-a4b8-4e6b-be99-794aea2a5be5" satisfied condition "Succeeded or Failed" May 25 00:57:42.129: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a8cfd32b-a4b8-4e6b-be99-794aea2a5be5 container client-container: STEP: delete the pod May 25 00:57:42.167: INFO: Waiting for pod downwardapi-volume-a8cfd32b-a4b8-4e6b-be99-794aea2a5be5 to disappear May 25 00:57:42.173: INFO: Pod downwardapi-volume-a8cfd32b-a4b8-4e6b-be99-794aea2a5be5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:57:42.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7330" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":240,"skipped":3880,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:57:42.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:57:42.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1315" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":241,"skipped":3907,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:57:42.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-dab1a300-355b-41be-ac34-d22aaa6ca8ef STEP: Creating the pod STEP: Updating configmap configmap-test-upd-dab1a300-355b-41be-ac34-d22aaa6ca8ef STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:59:03.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3306" for this suite. 
• [SLOW TEST:80.644 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":242,"skipped":4008,"failed":0} S ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:59:03.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:59:03.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-562" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":243,"skipped":4009,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:59:03.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1713.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1713.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1713.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1713.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 00:59:09.461: INFO: DNS probes using dns-test-75700114-0d3d-4be6-941a-53e2136684d8 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1713.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1713.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1713.svc.cluster.local 
CNAME > /results/jessie_udp@dns-test-service-3.dns-1713.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 00:59:17.774: INFO: File wheezy_udp@dns-test-service-3.dns-1713.svc.cluster.local from pod dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 00:59:17.778: INFO: File jessie_udp@dns-test-service-3.dns-1713.svc.cluster.local from pod dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 00:59:17.778: INFO: Lookups using dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 failed for: [wheezy_udp@dns-test-service-3.dns-1713.svc.cluster.local jessie_udp@dns-test-service-3.dns-1713.svc.cluster.local] May 25 00:59:22.783: INFO: File wheezy_udp@dns-test-service-3.dns-1713.svc.cluster.local from pod dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 00:59:22.787: INFO: File jessie_udp@dns-test-service-3.dns-1713.svc.cluster.local from pod dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 00:59:22.787: INFO: Lookups using dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 failed for: [wheezy_udp@dns-test-service-3.dns-1713.svc.cluster.local jessie_udp@dns-test-service-3.dns-1713.svc.cluster.local] May 25 00:59:27.784: INFO: File wheezy_udp@dns-test-service-3.dns-1713.svc.cluster.local from pod dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 00:59:27.788: INFO: File jessie_udp@dns-test-service-3.dns-1713.svc.cluster.local from pod dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 00:59:27.788: INFO: Lookups using dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 failed for: [wheezy_udp@dns-test-service-3.dns-1713.svc.cluster.local jessie_udp@dns-test-service-3.dns-1713.svc.cluster.local] May 25 00:59:32.783: INFO: File wheezy_udp@dns-test-service-3.dns-1713.svc.cluster.local from pod dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 00:59:32.786: INFO: File jessie_udp@dns-test-service-3.dns-1713.svc.cluster.local from pod dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 00:59:32.786: INFO: Lookups using dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 failed for: [wheezy_udp@dns-test-service-3.dns-1713.svc.cluster.local jessie_udp@dns-test-service-3.dns-1713.svc.cluster.local] May 25 00:59:37.787: INFO: File wheezy_udp@dns-test-service-3.dns-1713.svc.cluster.local from pod dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 00:59:37.791: INFO: File jessie_udp@dns-test-service-3.dns-1713.svc.cluster.local from pod dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 25 00:59:37.791: INFO: Lookups using dns-1713/dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 failed for: [wheezy_udp@dns-test-service-3.dns-1713.svc.cluster.local jessie_udp@dns-test-service-3.dns-1713.svc.cluster.local] May 25 00:59:42.787: INFO: DNS probes using dns-test-71cde668-8d1d-4091-b590-1c079c63a2f7 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1713.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1713.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1713.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1713.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 00:59:49.598: INFO: DNS probes using dns-test-30b757b7-9e72-4a26-8d7e-adf936156d5b succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:59:49.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1713" for this suite. • [SLOW TEST:46.521 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":244,"skipped":4026,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:59:49.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 00:59:50.048: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:59:56.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1619" for this suite. 
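
Note: the ExternalName spec above creates a service whose cluster DNS name resolves to a CNAME, repoints the target, and waits for probes to observe the new record; the repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" lines are just that propagation window, not a failure. A minimal sketch of the same service and the change, assuming a cluster (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# Repoint the CNAME target; in-cluster lookups pick it up once DNS caches expire.
kubectl patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'
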
• [SLOW TEST:6.625 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":245,"skipped":4047,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:59:56.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 00:59:56.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6398" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":288,"completed":246,"skipped":4061,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 00:59:56.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 25 00:59:56.790: INFO: Waiting up to 5m0s for pod "pod-aa37bddf-3b8b-439f-a083-a6d82f981377" in namespace "emptydir-9851" to be "Succeeded or Failed" May 25 00:59:56.794: INFO: Pod "pod-aa37bddf-3b8b-439f-a083-a6d82f981377": Phase="Pending", Reason="", readiness=false. Elapsed: 3.755653ms May 25 00:59:58.798: INFO: Pod "pod-aa37bddf-3b8b-439f-a083-a6d82f981377": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008111474s May 25 01:00:00.803: INFO: Pod "pod-aa37bddf-3b8b-439f-a083-a6d82f981377": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012552495s STEP: Saw pod success May 25 01:00:00.803: INFO: Pod "pod-aa37bddf-3b8b-439f-a083-a6d82f981377" satisfied condition "Succeeded or Failed" May 25 01:00:00.806: INFO: Trying to get logs from node latest-worker pod pod-aa37bddf-3b8b-439f-a083-a6d82f981377 container test-container: STEP: delete the pod May 25 01:00:00.844: INFO: Waiting for pod pod-aa37bddf-3b8b-439f-a083-a6d82f981377 to disappear May 25 01:00:00.853: INFO: Pod pod-aa37bddf-3b8b-439f-a083-a6d82f981377 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:00:00.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9851" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":247,"skipped":4074,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:00:00.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-3853c9c2-6db0-4bbe-ba34-fb7baa26624a STEP: Creating a pod to test consume secrets May 25 01:00:00.988: INFO: Waiting up to 5m0s for pod "pod-secrets-e523eda3-c0a3-4530-8c70-e5e21237f087" in namespace "secrets-164" to be "Succeeded or Failed" May 25 01:00:00.991: INFO: Pod "pod-secrets-e523eda3-c0a3-4530-8c70-e5e21237f087": Phase="Pending", Reason="", readiness=false. Elapsed: 3.138813ms May 25 01:00:02.996: INFO: Pod "pod-secrets-e523eda3-c0a3-4530-8c70-e5e21237f087": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007506981s May 25 01:00:04.999: INFO: Pod "pod-secrets-e523eda3-c0a3-4530-8c70-e5e21237f087": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011173963s STEP: Saw pod success May 25 01:00:04.999: INFO: Pod "pod-secrets-e523eda3-c0a3-4530-8c70-e5e21237f087" satisfied condition "Succeeded or Failed" May 25 01:00:05.002: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e523eda3-c0a3-4530-8c70-e5e21237f087 container secret-volume-test: STEP: delete the pod May 25 01:00:05.091: INFO: Waiting for pod pod-secrets-e523eda3-c0a3-4530-8c70-e5e21237f087 to disappear May 25 01:00:05.125: INFO: Pod pod-secrets-e523eda3-c0a3-4530-8c70-e5e21237f087 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:00:05.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-164" for this suite. 
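
Note: the Secrets spec above mounts a secret as a volume and reads it back from the test container. A minimal sketch (illustrative names):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-volume
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF
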
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":248,"skipped":4076,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:00:05.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 25 01:00:05.231: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:00:22.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8917" for this suite. • [SLOW TEST:17.083 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":249,"skipped":4078,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:00:22.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 25 01:00:26.854: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3a80fb72-5ab0-42c1-8b33-6fad486f96e3" May 25 01:00:26.854: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3a80fb72-5ab0-42c1-8b33-6fad486f96e3" in namespace "pods-4461" to be 
"terminated due to deadline exceeded" May 25 01:00:26.880: INFO: Pod "pod-update-activedeadlineseconds-3a80fb72-5ab0-42c1-8b33-6fad486f96e3": Phase="Running", Reason="", readiness=true. Elapsed: 26.203989ms May 25 01:00:28.885: INFO: Pod "pod-update-activedeadlineseconds-3a80fb72-5ab0-42c1-8b33-6fad486f96e3": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.030556252s May 25 01:00:28.885: INFO: Pod "pod-update-activedeadlineseconds-3a80fb72-5ab0-42c1-8b33-6fad486f96e3" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:00:28.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4461" for this suite. • [SLOW TEST:6.677 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":250,"skipped":4098,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:00:28.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 01:00:28.956: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38d41f71-4b84-4bd0-86e8-193455e9dc25" in namespace "projected-6036" to be "Succeeded or Failed" May 25 01:00:28.974: INFO: Pod "downwardapi-volume-38d41f71-4b84-4bd0-86e8-193455e9dc25": Phase="Pending", Reason="", readiness=false. Elapsed: 17.748835ms May 25 01:00:30.979: INFO: Pod "downwardapi-volume-38d41f71-4b84-4bd0-86e8-193455e9dc25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022728789s May 25 01:00:32.983: INFO: Pod "downwardapi-volume-38d41f71-4b84-4bd0-86e8-193455e9dc25": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027027172s STEP: Saw pod success May 25 01:00:32.983: INFO: Pod "downwardapi-volume-38d41f71-4b84-4bd0-86e8-193455e9dc25" satisfied condition "Succeeded or Failed" May 25 01:00:32.986: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-38d41f71-4b84-4bd0-86e8-193455e9dc25 container client-container: STEP: delete the pod May 25 01:00:33.057: INFO: Waiting for pod downwardapi-volume-38d41f71-4b84-4bd0-86e8-193455e9dc25 to disappear May 25 01:00:33.071: INFO: Pod downwardapi-volume-38d41f71-4b84-4bd0-86e8-193455e9dc25 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:00:33.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6036" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":251,"skipped":4104,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:00:33.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-3184 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3184 STEP: Deleting pre-stop pod May 25 01:00:46.327: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:00:46.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3184" for this suite. 
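The hook mechanics behind the output above: the kubelet runs a container's preStop handler before sending SIGTERM on pod deletion, which is why the server records {"prestop": 1}. Below is a minimal sketch of a pod carrying such a hook, built on the v1.18-era k8s.io/api types; the image, sleep command, and the wget target (SERVER_POD_IP placeholder) are illustrative stand-ins, not the suite's actual tester pod.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs before the kubelet sends SIGTERM when the pod is deleted.
					// corev1.Handler was renamed LifecycleHandler in later API versions.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Placeholder endpoint; the real test posts to its server pod.
							Command: []string{"wget", "-qO-", "http://SERVER_POD_IP:8080/write"},
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}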
• [SLOW TEST:13.305 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":252,"skipped":4112,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:00:46.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 25 01:00:46.421: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix346454664/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:00:46.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5773" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":253,"skipped":4116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:00:46.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 25 01:00:46.812: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server.
May 25 01:00:47.385: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 25 01:00:49.773: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965247, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965247, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965247, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965247, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 01:00:51.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965247, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965247, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965247, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965247, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 01:00:54.306: INFO: Waited 520.909206ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:00:54.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2196" for this suite. 
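Registering a sample API server amounts to creating an APIService object telling the aggregator where to proxy requests for /apis/<group>/<version>. A sketch of that object, assuming the kube-aggregator v1 types; the group, service reference, and CA bundle are illustrative placeholders rather than values taken from this run.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	port := int32(443)
	apiService := &apiregistrationv1.APIService{
		// The object name must be "<version>.<group>".
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:                "wardle.example.com", // illustrative group
			Version:              "v1alpha1",
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
			// CA that signed the extension apiserver's serving certificate.
			CABundle: []byte("<ca-bundle>"), // placeholder
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-2196",
				Name:      "sample-api", // illustrative service name
				Port:      &port,
			},
		},
	}
	fmt.Println(apiService.Name)
}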
• [SLOW TEST:8.116 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":254,"skipped":4205,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:00:54.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-b0de914a-494f-4974-a765-9740abfe94e8 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-b0de914a-494f-4974-a765-9740abfe94e8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:01:01.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2904" for this suite. 
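The update is observed without restarting the pod because the kubelet periodically re-syncs projected volume contents against the API server. A minimal sketch of the volume definition involved, assuming the v1.18-era corev1 types; the ConfigMap name is illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					// Keys of the referenced ConfigMap appear as files in the
					// mount; edits to the ConfigMap show up on the next sync.
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test", // illustrative name
						},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}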
• [SLOW TEST:6.183 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":255,"skipped":4219,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:01:01.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 25 01:01:01.114: INFO: Waiting up to 5m0s for pod "pod-f92a933a-d11a-452c-a39c-05128e94e6b1" in namespace "emptydir-9911" to be "Succeeded or Failed" May 25 01:01:01.138: INFO: Pod "pod-f92a933a-d11a-452c-a39c-05128e94e6b1": Phase="Pending", Reason="", readiness=false. Elapsed: 23.841265ms May 25 01:01:03.141: INFO: Pod "pod-f92a933a-d11a-452c-a39c-05128e94e6b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027117501s May 25 01:01:05.145: INFO: Pod "pod-f92a933a-d11a-452c-a39c-05128e94e6b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031425104s STEP: Saw pod success May 25 01:01:05.146: INFO: Pod "pod-f92a933a-d11a-452c-a39c-05128e94e6b1" satisfied condition "Succeeded or Failed" May 25 01:01:05.149: INFO: Trying to get logs from node latest-worker2 pod pod-f92a933a-d11a-452c-a39c-05128e94e6b1 container test-container: STEP: delete the pod May 25 01:01:05.169: INFO: Waiting for pod pod-f92a933a-d11a-452c-a39c-05128e94e6b1 to disappear May 25 01:01:05.173: INFO: Pod pod-f92a933a-d11a-452c-a39c-05128e94e6b1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:01:05.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9911" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":256,"skipped":4240,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:01:05.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:01:16.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3788" for this suite. • [SLOW TEST:11.242 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":288,"completed":257,"skipped":4242,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:01:16.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 01:01:16.543: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df432d28-0d03-484f-bb85-c80f498028d5" in namespace "downward-api-7036" to be "Succeeded or Failed" May 25 01:01:16.569: INFO: Pod "downwardapi-volume-df432d28-0d03-484f-bb85-c80f498028d5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.308686ms May 25 01:01:18.703: INFO: Pod "downwardapi-volume-df432d28-0d03-484f-bb85-c80f498028d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160017971s May 25 01:01:20.707: INFO: Pod "downwardapi-volume-df432d28-0d03-484f-bb85-c80f498028d5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.164546142s STEP: Saw pod success May 25 01:01:20.707: INFO: Pod "downwardapi-volume-df432d28-0d03-484f-bb85-c80f498028d5" satisfied condition "Succeeded or Failed" May 25 01:01:20.710: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-df432d28-0d03-484f-bb85-c80f498028d5 container client-container: STEP: delete the pod May 25 01:01:20.751: INFO: Waiting for pod downwardapi-volume-df432d28-0d03-484f-bb85-c80f498028d5 to disappear May 25 01:01:20.761: INFO: Pod downwardapi-volume-df432d28-0d03-484f-bb85-c80f498028d5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:01:20.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7036" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":258,"skipped":4246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:01:20.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:01:25.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6177" for this suite. 
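The assertion in the Docker Containers spec is simply that an unset Command/Args falls through to the image's own ENTRYPOINT and CMD. A sketch of such a container spec using corev1 types; the image is illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "image-defaults"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.12", // illustrative image
				// Command and Args are deliberately left unset: the runtime then
				// uses the image's ENTRYPOINT and CMD, which is what the test checks.
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command == nil)
}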
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":259,"skipped":4271,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:01:25.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0525 01:01:37.828841 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 25 01:01:37.828: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:01:37.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-449" for this suite. 
• [SLOW TEST:12.476 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":260,"skipped":4290,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:01:37.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:01:38.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-594" for this suite. 
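The create/patch/fetch/delete cycle above maps directly onto the core/v1 events client. A sketch assuming client-go v0.18-style context-taking signatures; the kubeconfig path matches this run, but the namespace and object names are illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "default" // the suite uses its generated namespace instead

	ev := &corev1.Event{
		ObjectMeta:     metav1.ObjectMeta{Name: "test-event"},
		InvolvedObject: corev1.ObjectReference{Namespace: ns, Kind: "Pod", Name: "some-pod"},
		Reason:         "Testing",
		Message:        "original message",
		Type:           corev1.EventTypeNormal,
	}
	created, err := client.CoreV1().Events(ns).Create(context.TODO(), ev, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Strategic-merge patch of a single field, as in the "patching" step.
	patch := []byte(`{"message":"patched message"}`)
	patched, err := client.CoreV1().Events(ns).Patch(
		context.TODO(), created.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(patched.Message)

	if err := client.CoreV1().Events(ns).Delete(context.TODO(), created.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}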
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":261,"skipped":4309,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:01:38.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 01:01:39.533: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 01:01:41.585: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965299, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965299, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965299, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965299, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 01:01:44.991: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 01:01:45.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7749-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:01:47.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5567" for this suite. STEP: Destroying namespace "webhook-5567-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.884 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":262,"skipped":4316,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:01:47.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 25 01:01:47.359: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38cf5e3a-2bf7-4ed6-80bf-690283c948ed" in namespace "projected-9846" to be "Succeeded or Failed" May 25 01:01:47.375: INFO: Pod "downwardapi-volume-38cf5e3a-2bf7-4ed6-80bf-690283c948ed": Phase="Pending", Reason="", readiness=false. Elapsed: 15.995198ms May 25 01:01:49.379: INFO: Pod "downwardapi-volume-38cf5e3a-2bf7-4ed6-80bf-690283c948ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020153373s May 25 01:01:51.383: INFO: Pod "downwardapi-volume-38cf5e3a-2bf7-4ed6-80bf-690283c948ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023768993s STEP: Saw pod success May 25 01:01:51.383: INFO: Pod "downwardapi-volume-38cf5e3a-2bf7-4ed6-80bf-690283c948ed" satisfied condition "Succeeded or Failed" May 25 01:01:51.409: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-38cf5e3a-2bf7-4ed6-80bf-690283c948ed container client-container: STEP: delete the pod May 25 01:01:51.444: INFO: Waiting for pod downwardapi-volume-38cf5e3a-2bf7-4ed6-80bf-690283c948ed to disappear May 25 01:01:51.451: INFO: Pod downwardapi-volume-38cf5e3a-2bf7-4ed6-80bf-690283c948ed no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:01:51.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9846" for this suite. 
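The container's requests.cpu reaches it as a file through a downward API projection; the divisor controls the unit the value is reported in. A sketch using the v1.18-era corev1 types; the volume name, file path, and container name follow the test's convention but are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
								Divisor:       resource.MustParse("1m"), // report millicores
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}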
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":263,"skipped":4326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:01:51.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 25 01:01:51.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 25 01:01:51.672: INFO: stderr: "" May 25 01:01:51.672: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:01:51.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7031" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":264,"skipped":4367,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:01:51.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:01:51.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5682" for this suite. STEP: Destroying namespace "nspatchtest-893c5e60-7ffc-4874-aa9c-d555b650ad87-3368" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":265,"skipped":4383,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:01:51.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 25 01:01:52.701: INFO: Pod name wrapped-volume-race-7d98b3e6-a905-41b2-b661-01b7fe5568a6: Found 0 pods out of 5 May 25 01:01:57.712: INFO: Pod name wrapped-volume-race-7d98b3e6-a905-41b2-b661-01b7fe5568a6: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7d98b3e6-a905-41b2-b661-01b7fe5568a6 in namespace emptydir-wrapper-7673, will wait for the garbage collector to delete the pods May 25 01:02:11.824: INFO: Deleting ReplicationController wrapped-volume-race-7d98b3e6-a905-41b2-b661-01b7fe5568a6 took: 6.74648ms May 25 01:02:12.224: INFO: Terminating ReplicationController wrapped-volume-race-7d98b3e6-a905-41b2-b661-01b7fe5568a6 pods took: 400.302927ms STEP: Creating RC which spawns configmap-volume pods May 25 01:02:25.611: INFO: Pod name wrapped-volume-race-5e299f8c-5cad-4cc2-85d4-4cd7e3177aea: Found 0 pods out of 5 May 25 01:02:30.620: INFO: Pod name wrapped-volume-race-5e299f8c-5cad-4cc2-85d4-4cd7e3177aea: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5e299f8c-5cad-4cc2-85d4-4cd7e3177aea in namespace emptydir-wrapper-7673, will wait for the garbage collector to delete the pods May 25 01:02:44.734: INFO: Deleting ReplicationController wrapped-volume-race-5e299f8c-5cad-4cc2-85d4-4cd7e3177aea took: 7.526755ms May 25 01:02:45.134: INFO: Terminating ReplicationController wrapped-volume-race-5e299f8c-5cad-4cc2-85d4-4cd7e3177aea pods took: 400.227939ms STEP: Creating RC which spawns configmap-volume pods May 25 01:02:55.008: INFO: Pod name wrapped-volume-race-1ed80bd3-6bdb-4cdd-8314-3a492376a155: Found 0 pods out of 5 May 25 01:03:00.018: INFO: Pod name wrapped-volume-race-1ed80bd3-6bdb-4cdd-8314-3a492376a155: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1ed80bd3-6bdb-4cdd-8314-3a492376a155 in namespace emptydir-wrapper-7673, will wait for the garbage collector to delete the pods May 25 01:03:14.121: INFO: Deleting ReplicationController wrapped-volume-race-1ed80bd3-6bdb-4cdd-8314-3a492376a155 took: 32.916008ms May 25 01:03:14.421: INFO: Terminating ReplicationController wrapped-volume-race-1ed80bd3-6bdb-4cdd-8314-3a492376a155 pods took: 300.241004ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:03:26.628: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7673" for this suite. • [SLOW TEST:94.761 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":266,"skipped":4386,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:03:26.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 01:03:26.720: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' May 25 01:03:26.857: INFO: stderr: "" May 25 01:03:26.857: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:03:26.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7201" for this suite. 
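The two halves of the `kubectl version` output come from different places: the client block is the local binary's build info, while the server block is fetched from the apiserver's /version endpoint. A sketch of fetching the server half programmatically via the discovery client, assuming client-go v0.18-style APIs.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Returns the same version.Info the "Server Version" half of
	// `kubectl version` prints (GitVersion, GitCommit, BuildDate, ...).
	info, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s (%s)\n", info.GitVersion, info.Platform)
}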
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":267,"skipped":4417,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:03:26.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 01:03:27.016: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 25 01:03:27.040: INFO: Number of nodes with available pods: 0 May 25 01:03:27.040: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 25 01:03:27.169: INFO: Number of nodes with available pods: 0 May 25 01:03:27.169: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:28.243: INFO: Number of nodes with available pods: 0 May 25 01:03:28.243: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:29.174: INFO: Number of nodes with available pods: 0 May 25 01:03:29.174: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:30.174: INFO: Number of nodes with available pods: 0 May 25 01:03:30.174: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:31.174: INFO: Number of nodes with available pods: 1 May 25 01:03:31.174: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 25 01:03:31.210: INFO: Number of nodes with available pods: 1 May 25 01:03:31.210: INFO: Number of running nodes: 0, number of available pods: 1 May 25 01:03:32.242: INFO: Number of nodes with available pods: 0 May 25 01:03:32.242: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 25 01:03:32.280: INFO: Number of nodes with available pods: 0 May 25 01:03:32.280: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:33.339: INFO: Number of nodes with available pods: 0 May 25 01:03:33.339: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:34.291: INFO: Number of nodes with available pods: 0 May 25 01:03:34.291: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:35.283: INFO: Number of nodes with available pods: 0 May 25 01:03:35.283: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:36.284: INFO: Number of nodes with available pods: 0 May 25 01:03:36.284: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:37.285: INFO: Number of nodes with available pods: 0 May 25 01:03:37.285: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:38.285: INFO: 
Number of nodes with available pods: 0 May 25 01:03:38.285: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:39.285: INFO: Number of nodes with available pods: 0 May 25 01:03:39.285: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:40.284: INFO: Number of nodes with available pods: 0 May 25 01:03:40.284: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:41.284: INFO: Number of nodes with available pods: 0 May 25 01:03:41.284: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:42.285: INFO: Number of nodes with available pods: 0 May 25 01:03:42.285: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:43.285: INFO: Number of nodes with available pods: 0 May 25 01:03:43.285: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:44.291: INFO: Number of nodes with available pods: 0 May 25 01:03:44.291: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:45.291: INFO: Number of nodes with available pods: 0 May 25 01:03:45.291: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:46.304: INFO: Number of nodes with available pods: 0 May 25 01:03:46.304: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:47.344: INFO: Number of nodes with available pods: 0 May 25 01:03:47.344: INFO: Node latest-worker2 is running more than one daemon pod May 25 01:03:48.303: INFO: Number of nodes with available pods: 1 May 25 01:03:48.303: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8353, will wait for the garbage collector to delete the pods May 25 01:03:48.418: INFO: Deleting DaemonSet.extensions daemon-set took: 32.113462ms May 25 01:03:48.718: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.428582ms May 25 01:03:55.323: INFO: Number of nodes with available pods: 0 May 25 01:03:55.323: INFO: Number of running nodes: 0, number of available pods: 0 May 25 01:03:55.326: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8353/daemonsets","resourceVersion":"7434133"},"items":null} May 25 01:03:55.329: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8353/pods","resourceVersion":"7434133"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:03:55.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8353" for this suite. 
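What makes this daemon "complex" is the combination of a pod node selector and a mid-test update strategy change: daemon pods appear on nodes labeled blue, are evicted when the node is relabeled green, and return once the selector is updated. A sketch of a DaemonSet wired this way, assuming apps/v1 types; the image and label values are illustrative.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes carrying color=blue run a daemon pod; relabeling
					// a node unschedules its pod, as observed in the log above.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.12", // illustrative image
					}},
				},
			},
		},
	}
	fmt.Println(ds.Name)
}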
• [SLOW TEST:28.500 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":268,"skipped":4437,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:03:55.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-acb87bdf-fa2a-4076-8830-b606cdfe9cf5 in namespace container-probe-4369 May 25 01:03:59.485: INFO: Started pod busybox-acb87bdf-fa2a-4076-8830-b606cdfe9cf5 in namespace container-probe-4369 STEP: checking the pod's current state and verifying that restartCount is present May 25 01:03:59.487: INFO: Initial restart count of pod busybox-acb87bdf-fa2a-4076-8830-b606cdfe9cf5 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:08:00.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4369" for this suite. 
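A probe that always succeeds (`cat /tmp/health` against a file the container creates at startup) should never trigger a restart, which is why the suite watches restartCount for roughly four minutes before passing. A sketch of the probe wiring, assuming the v1.18-era corev1 types where Probe embeds Handler (later renamed ProbeHandler); the image and timings are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29", // illustrative tag
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						// Succeeds for as long as /tmp/health exists.
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}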
• [SLOW TEST:244.932 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":269,"skipped":4456,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:08:00.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-6351 STEP: creating a selector STEP: Creating the service pods in kubernetes May 25 01:08:00.360: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 25 01:08:00.461: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 01:08:02.515: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 01:08:04.465: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 01:08:06.466: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 01:08:08.465: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 01:08:10.465: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 01:08:12.464: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 01:08:14.470: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 01:08:16.466: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 01:08:18.466: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 01:08:20.466: INFO: The status of Pod netserver-0 is Running (Ready = true) May 25 01:08:20.472: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 25 01:08:24.558: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.233:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6351 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 01:08:24.558: INFO: >>> kubeConfig: /root/.kube/config I0525 01:08:24.596765 7 log.go:172] (0xc0026c1e40) (0xc001e75e00) Create stream I0525 01:08:24.596800 7 log.go:172] (0xc0026c1e40) (0xc001e75e00) Stream added, broadcasting: 1 I0525 01:08:24.599019 7 log.go:172] (0xc0026c1e40) Reply frame received for 1 I0525 01:08:24.599050 7 log.go:172] (0xc0026c1e40) (0xc000b96140) Create stream 
I0525 01:08:24.599064 7 log.go:172] (0xc0026c1e40) (0xc000b96140) Stream added, broadcasting: 3 I0525 01:08:24.600101 7 log.go:172] (0xc0026c1e40) Reply frame received for 3 I0525 01:08:24.600132 7 log.go:172] (0xc0026c1e40) (0xc000b96320) Create stream I0525 01:08:24.600143 7 log.go:172] (0xc0026c1e40) (0xc000b96320) Stream added, broadcasting: 5 I0525 01:08:24.601060 7 log.go:172] (0xc0026c1e40) Reply frame received for 5 I0525 01:08:24.704352 7 log.go:172] (0xc0026c1e40) Data frame received for 5 I0525 01:08:24.704391 7 log.go:172] (0xc000b96320) (5) Data frame handling I0525 01:08:24.704468 7 log.go:172] (0xc0026c1e40) Data frame received for 3 I0525 01:08:24.704509 7 log.go:172] (0xc000b96140) (3) Data frame handling I0525 01:08:24.704650 7 log.go:172] (0xc000b96140) (3) Data frame sent I0525 01:08:24.704675 7 log.go:172] (0xc0026c1e40) Data frame received for 3 I0525 01:08:24.704691 7 log.go:172] (0xc000b96140) (3) Data frame handling I0525 01:08:24.706605 7 log.go:172] (0xc0026c1e40) Data frame received for 1 I0525 01:08:24.706637 7 log.go:172] (0xc001e75e00) (1) Data frame handling I0525 01:08:24.706658 7 log.go:172] (0xc001e75e00) (1) Data frame sent I0525 01:08:24.706676 7 log.go:172] (0xc0026c1e40) (0xc001e75e00) Stream removed, broadcasting: 1 I0525 01:08:24.706698 7 log.go:172] (0xc0026c1e40) Go away received I0525 01:08:24.706849 7 log.go:172] (0xc0026c1e40) (0xc001e75e00) Stream removed, broadcasting: 1 I0525 01:08:24.706875 7 log.go:172] (0xc0026c1e40) (0xc000b96140) Stream removed, broadcasting: 3 I0525 01:08:24.706888 7 log.go:172] (0xc0026c1e40) (0xc000b96320) Stream removed, broadcasting: 5 May 25 01:08:24.706: INFO: Found all expected endpoints: [netserver-0] May 25 01:08:24.710: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.238:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6351 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 01:08:24.710: INFO: >>> kubeConfig: /root/.kube/config I0525 01:08:24.749403 7 log.go:172] (0xc005b12370) (0xc000b97d60) Create stream I0525 01:08:24.749438 7 log.go:172] (0xc005b12370) (0xc000b97d60) Stream added, broadcasting: 1 I0525 01:08:24.752242 7 log.go:172] (0xc005b12370) Reply frame received for 1 I0525 01:08:24.752304 7 log.go:172] (0xc005b12370) (0xc00158c0a0) Create stream I0525 01:08:24.752323 7 log.go:172] (0xc005b12370) (0xc00158c0a0) Stream added, broadcasting: 3 I0525 01:08:24.753804 7 log.go:172] (0xc005b12370) Reply frame received for 3 I0525 01:08:24.753861 7 log.go:172] (0xc005b12370) (0xc000b97ea0) Create stream I0525 01:08:24.753881 7 log.go:172] (0xc005b12370) (0xc000b97ea0) Stream added, broadcasting: 5 I0525 01:08:24.755064 7 log.go:172] (0xc005b12370) Reply frame received for 5 I0525 01:08:24.961538 7 log.go:172] (0xc005b12370) Data frame received for 5 I0525 01:08:24.961584 7 log.go:172] (0xc000b97ea0) (5) Data frame handling I0525 01:08:24.961620 7 log.go:172] (0xc005b12370) Data frame received for 3 I0525 01:08:24.961640 7 log.go:172] (0xc00158c0a0) (3) Data frame handling I0525 01:08:24.961657 7 log.go:172] (0xc00158c0a0) (3) Data frame sent I0525 01:08:24.961673 7 log.go:172] (0xc005b12370) Data frame received for 3 I0525 01:08:24.961690 7 log.go:172] (0xc00158c0a0) (3) Data frame handling I0525 01:08:24.962916 7 log.go:172] (0xc005b12370) Data frame received for 1 I0525 01:08:24.962934 7 log.go:172] (0xc000b97d60) (1) Data frame handling I0525 
01:08:24.962945 7 log.go:172] (0xc000b97d60) (1) Data frame sent I0525 01:08:24.962957 7 log.go:172] (0xc005b12370) (0xc000b97d60) Stream removed, broadcasting: 1 I0525 01:08:24.963070 7 log.go:172] (0xc005b12370) Go away received I0525 01:08:24.963112 7 log.go:172] (0xc005b12370) (0xc000b97d60) Stream removed, broadcasting: 1 I0525 01:08:24.963145 7 log.go:172] (0xc005b12370) (0xc00158c0a0) Stream removed, broadcasting: 3 I0525 01:08:24.963160 7 log.go:172] (0xc005b12370) (0xc000b97ea0) Stream removed, broadcasting: 5 May 25 01:08:24.963: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:08:24.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6351" for this suite. • [SLOW TEST:24.667 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":270,"skipped":4494,"failed":0} [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:08:24.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:08:41.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4592" for this suite. 
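The ResourceQuota spec above exercises quota scopes: a quota scoped to Terminating only counts pods that set spec.activeDeadlineSeconds, while a NotTerminating scope counts the rest. A minimal sketch of the two quota objects the test creates, with illustrative names not taken from the log:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-terminating          # illustrative name
spec:
  hard:
    pods: "1"
  scopes:
  - Terminating                    # matches only pods with spec.activeDeadlineSeconds set
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-terminating      # illustrative name
spec:
  hard:
    pods: "1"
  scopes:
  - NotTerminating                 # matches only pods without activeDeadlineSeconds

A long-running pod is charged only against the NotTerminating quota, and a pod with an activeDeadlineSeconds only against the Terminating one, which is the accounting the STEP lines above verify in both directions.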
• [SLOW TEST:16.448 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":288,"completed":271,"skipped":4494,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:08:41.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0525 01:08:42.649044 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 25 01:08:42.649: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:08:42.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2223" for this suite. 
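The garbage-collector spec above deletes the Deployment with deleteOptions.PropagationPolicy=Orphan and then waits to confirm the ReplicaSet it created is left behind rather than cascaded away. A hedged sketch of the DeleteOptions body an API client would send; the kubectl flags in the comment are an assumption about tooling of this era, not taken from the log:

apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan          # dependents (the ReplicaSet) are orphaned, not deleted
# Roughly equivalent CLI, assuming a kubectl of this vintage:
#   kubectl delete deployment <name> --cascade=false
# (newer kubectl spells the same intent as --cascade=orphan)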
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":272,"skipped":4501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:08:42.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 25 01:10:43.395: INFO: Successfully updated pod "var-expansion-8ebf6193-fcf3-47ea-a607-da7fb0eed620" STEP: waiting for pod running STEP: deleting the pod gracefully May 25 01:10:45.423: INFO: Deleting pod "var-expansion-8ebf6193-fcf3-47ea-a607-da7fb0eed620" in namespace "var-expansion-9881" May 25 01:10:45.430: INFO: Wait up to 5m0s for pod "var-expansion-8ebf6193-fcf3-47ea-a607-da7fb0eed620" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:11:25.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9881" for this suite. 
• [SLOW TEST:162.805 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":273,"skipped":4539,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:11:25.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 25 01:11:30.164: INFO: Successfully updated pod "labelsupdate10cda456-43fd-4472-89f2-0242e6bf2729" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:11:32.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1766" for this suite. 
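The Downward API spec above updates the pod's labels after creation and expects the file projected into the volume to change to match, since the kubelet rewrites downward API files when the referenced metadata changes. A minimal sketch of such a pod; the name, label, and polling command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo          # illustrative name
  labels:
    key1: value1
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels   # file content tracks label updates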
• [SLOW TEST:6.787 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":274,"skipped":4547,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:11:32.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-2e19e70c-a6a5-42bf-b770-c9fcbd48e0cb STEP: Creating a pod to test consume configMaps May 25 01:11:32.364: INFO: Waiting up to 5m0s for pod "pod-configmaps-4073bfae-b608-422d-b44c-6b9a65696ab7" in namespace "configmap-5290" to be "Succeeded or Failed" May 25 01:11:32.368: INFO: Pod "pod-configmaps-4073bfae-b608-422d-b44c-6b9a65696ab7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.57532ms May 25 01:11:34.373: INFO: Pod "pod-configmaps-4073bfae-b608-422d-b44c-6b9a65696ab7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008180321s May 25 01:11:36.377: INFO: Pod "pod-configmaps-4073bfae-b608-422d-b44c-6b9a65696ab7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012882016s STEP: Saw pod success May 25 01:11:36.377: INFO: Pod "pod-configmaps-4073bfae-b608-422d-b44c-6b9a65696ab7" satisfied condition "Succeeded or Failed" May 25 01:11:36.381: INFO: Trying to get logs from node latest-worker pod pod-configmaps-4073bfae-b608-422d-b44c-6b9a65696ab7 container configmap-volume-test: STEP: delete the pod May 25 01:11:36.434: INFO: Waiting for pod pod-configmaps-4073bfae-b608-422d-b44c-6b9a65696ab7 to disappear May 25 01:11:36.445: INFO: Pod pod-configmaps-4073bfae-b608-422d-b44c-6b9a65696ab7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:11:36.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5290" for this suite. 
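The ConfigMap spec above mounts a ConfigMap volume into a pod running as a non-root user and checks that the container can still read the projected key (the "Trying to get logs from ... container configmap-volume-test" line is that read-back). A minimal sketch under assumed names; the ConfigMap name, key, and UID are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo     # illustrative name
spec:
  securityContext:
    runAsUser: 1000                # non-root UID; illustrative value
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/config/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: my-config              # assumed to exist with a key named data-1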
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":275,"skipped":4580,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:11:36.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-317fb475-eaac-4b17-9e3e-19281d0bd758 in namespace container-probe-1869 May 25 01:11:40.895: INFO: Started pod busybox-317fb475-eaac-4b17-9e3e-19281d0bd758 in namespace container-probe-1869 STEP: checking the pod's current state and verifying that restartCount is present May 25 01:11:40.914: INFO: Initial restart count of pod busybox-317fb475-eaac-4b17-9e3e-19281d0bd758 is 0 May 25 01:12:27.206: INFO: Restart count of pod container-probe-1869/busybox-317fb475-eaac-4b17-9e3e-19281d0bd758 is now 1 (46.291517205s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:12:27.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1869" for this suite. 
• [SLOW TEST:50.613 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":276,"skipped":4581,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:12:27.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 01:12:27.405: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-665a3d42-c838-407b-9999-4e14da32332f" in namespace "security-context-test-9622" to be "Succeeded or Failed" May 25 01:12:27.417: INFO: Pod "busybox-privileged-false-665a3d42-c838-407b-9999-4e14da32332f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.222725ms May 25 01:12:29.421: INFO: Pod "busybox-privileged-false-665a3d42-c838-407b-9999-4e14da32332f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016265671s May 25 01:12:31.426: INFO: Pod "busybox-privileged-false-665a3d42-c838-407b-9999-4e14da32332f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021045397s May 25 01:12:31.426: INFO: Pod "busybox-privileged-false-665a3d42-c838-407b-9999-4e14da32332f" satisfied condition "Succeeded or Failed" May 25 01:12:31.435: INFO: Got logs for pod "busybox-privileged-false-665a3d42-c838-407b-9999-4e14da32332f": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:12:31.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9622" for this suite. 
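The Security Context spec above runs a container with privileged: false and asserts that a privileged network operation is refused; the logged "ip: RTNETLINK answers: Operation not permitted" is that refusal. A minimal sketch; the exact command is an assumption, chosen because it requires NET_ADMIN:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]   # assumed NET_ADMIN operation
    securityContext:
      privileged: false            # without privilege the netlink call is denied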
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":277,"skipped":4590,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:12:31.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 25 01:12:31.750: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 01:12:31.777: INFO: Waiting for terminating namespaces to be deleted... May 25 01:12:31.782: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 25 01:12:31.787: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 25 01:12:31.787: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 25 01:12:31.787: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 25 01:12:31.787: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 25 01:12:31.787: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 25 01:12:31.787: INFO: Container kindnet-cni ready: true, restart count 0 May 25 01:12:31.787: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 25 01:12:31.787: INFO: Container kube-proxy ready: true, restart count 0 May 25 01:12:31.787: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 25 01:12:31.792: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 25 01:12:31.792: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 25 01:12:31.792: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 25 01:12:31.792: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 25 01:12:31.792: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 25 01:12:31.792: INFO: Container kindnet-cni ready: true, restart count 0 May 25 01:12:31.792: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 25 01:12:31.792: INFO: Container kube-proxy ready: true, restart count 0 May 25 01:12:31.792: INFO: busybox-privileged-false-665a3d42-c838-407b-9999-4e14da32332f from security-context-test-9622 started at 2020-05-25 01:12:27 +0000 UTC (1 container statuses recorded) May 25 01:12:31.792: INFO: Container busybox-privileged-false-665a3d42-c838-407b-9999-4e14da32332f ready: false, restart 
count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1612205145d82b19], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.161220514884f15e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:12:32.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-98" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":278,"skipped":4594,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:12:32.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-db6d1623-664f-48d7-8d78-8c0c3dedb59e STEP: Creating a pod to test consume configMaps May 25 01:12:32.940: INFO: Waiting up to 5m0s for pod "pod-configmaps-06db82e6-b27e-4f26-97d3-a3240490f934" in namespace "configmap-5571" to be "Succeeded or Failed" May 25 01:12:32.958: INFO: Pod "pod-configmaps-06db82e6-b27e-4f26-97d3-a3240490f934": Phase="Pending", Reason="", readiness=false. Elapsed: 17.412292ms May 25 01:12:34.995: INFO: Pod "pod-configmaps-06db82e6-b27e-4f26-97d3-a3240490f934": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054858602s May 25 01:12:37.007: INFO: Pod "pod-configmaps-06db82e6-b27e-4f26-97d3-a3240490f934": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.066766766s STEP: Saw pod success May 25 01:12:37.007: INFO: Pod "pod-configmaps-06db82e6-b27e-4f26-97d3-a3240490f934" satisfied condition "Succeeded or Failed" May 25 01:12:37.016: INFO: Trying to get logs from node latest-worker pod pod-configmaps-06db82e6-b27e-4f26-97d3-a3240490f934 container configmap-volume-test: STEP: delete the pod May 25 01:12:37.083: INFO: Waiting for pod pod-configmaps-06db82e6-b27e-4f26-97d3-a3240490f934 to disappear May 25 01:12:37.148: INFO: Pod pod-configmaps-06db82e6-b27e-4f26-97d3-a3240490f934 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:12:37.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5571" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":279,"skipped":4597,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:12:37.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 01:12:37.334: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 25 01:12:42.348: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 25 01:12:42.349: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 25 01:12:44.352: INFO: Creating deployment "test-rollover-deployment" May 25 01:12:44.363: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 25 01:12:46.370: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 25 01:12:46.377: INFO: Ensure that both replica sets have 1 created replica May 25 01:12:46.383: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 25 01:12:46.391: INFO: Updating deployment test-rollover-deployment May 25 01:12:46.391: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 25 01:12:48.420: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 25 01:12:48.427: INFO: Make sure deployment "test-rollover-deployment" is complete May 25 01:12:48.433: INFO: all replica sets need to contain the pod-template-hash label May 25 01:12:48.433: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965966, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 01:12:50.455: INFO: all replica sets need to contain the pod-template-hash label May 25 01:12:50.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965970, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 01:12:52.441: INFO: all replica sets need to contain the pod-template-hash label May 25 01:12:52.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965970, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 01:12:54.441: INFO: all replica sets need to contain the pod-template-hash label May 25 01:12:54.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965970, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 01:12:56.442: INFO: all replica sets need to contain the pod-template-hash label May 25 01:12:56.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965970, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 01:12:58.442: INFO: all replica sets need to contain the pod-template-hash label May 25 01:12:58.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965970, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 01:13:00.497: INFO: May 25 01:13:00.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965980, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725965964, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 01:13:02.442: INFO: May 25 01:13:02.442: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 25 01:13:02.458: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:{test-rollover-deployment deployment-8834 /apis/apps/v1/namespaces/deployment-8834/deployments/test-rollover-deployment ef420778-d37b-4f77-9e7f-e801c8644844 7436183 2 2020-05-25 01:12:44 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-25 01:12:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-25 01:13:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003908588 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-25 01:12:44 +0000 UTC,LastTransitionTime:2020-05-25 01:12:44 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-25 01:13:00 +0000 UTC,LastTransitionTime:2020-05-25 01:12:44 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 25 01:13:02.462: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment 
"test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-8834 /apis/apps/v1/namespaces/deployment-8834/replicasets/test-rollover-deployment-7c4fd9c879 027c0fbb-82a1-470f-aae5-9e2652aa4afc 7436169 2 2020-05-25 01:12:46 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment ef420778-d37b-4f77-9e7f-e801c8644844 0xc003908bb7 0xc003908bb8}] [] [{kube-controller-manager Update apps/v1 2020-05-25 01:13:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef420778-d37b-4f77-9e7f-e801c8644844\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003908c48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 25 01:13:02.462: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 25 01:13:02.462: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8834 /apis/apps/v1/namespaces/deployment-8834/replicasets/test-rollover-controller ffd3f566-a57f-4d84-aa2f-90e70053efe3 7436181 2 2020-05-25 01:12:37 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment ef420778-d37b-4f77-9e7f-e801c8644844 0xc0039089a7 0xc0039089a8}] [] [{e2e.test Update apps/v1 2020-05-25 01:12:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-25 01:13:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef420778-d37b-4f77-9e7f-e801c8644844\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003908a48 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 01:13:02.462: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-8834 /apis/apps/v1/namespaces/deployment-8834/replicasets/test-rollover-deployment-5686c4cfd5 784fafbe-fb5b-47d5-a333-129a9deb6451 7436122 2 2020-05-25 01:12:44 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment ef420778-d37b-4f77-9e7f-e801c8644844 0xc003908ab7 0xc003908ab8}] [] [{kube-controller-manager Update apps/v1 2020-05-25 01:12:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef420778-d37b-4f77-9e7f-e801c8644844\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003908b48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 01:13:02.466: INFO: Pod "test-rollover-deployment-7c4fd9c879-bmpxp" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-bmpxp test-rollover-deployment-7c4fd9c879- deployment-8834 /api/v1/namespaces/deployment-8834/pods/test-rollover-deployment-7c4fd9c879-bmpxp c9335263-6ef0-486b-a431-d44d3e52ddf4 7436139 0 2020-05-25 01:12:46 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 027c0fbb-82a1-470f-aae5-9e2652aa4afc 0xc0039b5687 0xc0039b5688}] [] [{kube-controller-manager Update v1 2020-05-25 01:12:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"027c0fbb-82a1-470f-aae5-9e2652aa4afc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:12:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.243\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v286r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v286r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v286r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:12:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-25 01:12:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:12:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:12:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.243,StartTime:2020-05-25 01:12:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 01:12:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://7ac3e3f43ff4533fc5b70536864cdba38087442cc19ea4b2a3b187f2e6075ae8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.243,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:13:02.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8834" for this suite. • [SLOW TEST:25.319 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":280,"skipped":4638,"failed":0} S ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:13:02.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-370 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-370 to expose endpoints map[] May 25 01:13:02.630: INFO: successfully validated that service multi-endpoint-test in namespace services-370 exposes endpoints map[] (23.627311ms elapsed) STEP: Creating pod pod1 in namespace services-370 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-370 to expose endpoints map[pod1:[100]] May 25 01:13:05.700: INFO: successfully validated that service multi-endpoint-test in namespace services-370 exposes endpoints map[pod1:[100]] (3.05750909s elapsed) STEP: Creating pod pod2 in namespace 
services-370 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-370 to expose endpoints map[pod1:[100] pod2:[101]] May 25 01:13:10.096: INFO: successfully validated that service multi-endpoint-test in namespace services-370 exposes endpoints map[pod1:[100] pod2:[101]] (4.387663375s elapsed) STEP: Deleting pod pod1 in namespace services-370 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-370 to expose endpoints map[pod2:[101]] May 25 01:13:11.188: INFO: successfully validated that service multi-endpoint-test in namespace services-370 exposes endpoints map[pod2:[101]] (1.087859886s elapsed) STEP: Deleting pod pod2 in namespace services-370 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-370 to expose endpoints map[] May 25 01:13:12.222: INFO: successfully validated that service multi-endpoint-test in namespace services-370 exposes endpoints map[] (1.028883971s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:13:12.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-370" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:9.942 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":281,"skipped":4639,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:13:12.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3032 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3032 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3032 May 25 01:13:12.562: INFO: Found 0 stateful pods, waiting for 1 May 25 01:13:22.567: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with 
unhealthy stateful pod May 25 01:13:22.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 01:13:25.820: INFO: stderr: "I0525 01:13:25.703171 4463 log.go:172] (0xc000dca2c0) (0xc0006ecbe0) Create stream\nI0525 01:13:25.703211 4463 log.go:172] (0xc000dca2c0) (0xc0006ecbe0) Stream added, broadcasting: 1\nI0525 01:13:25.709548 4463 log.go:172] (0xc000dca2c0) Reply frame received for 1\nI0525 01:13:25.709599 4463 log.go:172] (0xc000dca2c0) (0xc00071ce60) Create stream\nI0525 01:13:25.709615 4463 log.go:172] (0xc000dca2c0) (0xc00071ce60) Stream added, broadcasting: 3\nI0525 01:13:25.711196 4463 log.go:172] (0xc000dca2c0) Reply frame received for 3\nI0525 01:13:25.711238 4463 log.go:172] (0xc000dca2c0) (0xc00071d400) Create stream\nI0525 01:13:25.711257 4463 log.go:172] (0xc000dca2c0) (0xc00071d400) Stream added, broadcasting: 5\nI0525 01:13:25.712802 4463 log.go:172] (0xc000dca2c0) Reply frame received for 5\nI0525 01:13:25.774140 4463 log.go:172] (0xc000dca2c0) Data frame received for 5\nI0525 01:13:25.774173 4463 log.go:172] (0xc00071d400) (5) Data frame handling\nI0525 01:13:25.774196 4463 log.go:172] (0xc00071d400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 01:13:25.811320 4463 log.go:172] (0xc000dca2c0) Data frame received for 3\nI0525 01:13:25.811346 4463 log.go:172] (0xc00071ce60) (3) Data frame handling\nI0525 01:13:25.811366 4463 log.go:172] (0xc00071ce60) (3) Data frame sent\nI0525 01:13:25.811653 4463 log.go:172] (0xc000dca2c0) Data frame received for 3\nI0525 01:13:25.811696 4463 log.go:172] (0xc00071ce60) (3) Data frame handling\nI0525 01:13:25.811812 4463 log.go:172] (0xc000dca2c0) Data frame received for 5\nI0525 01:13:25.811875 4463 log.go:172] (0xc00071d400) (5) Data frame handling\nI0525 01:13:25.813652 4463 log.go:172] (0xc000dca2c0) Data frame received for 1\nI0525 01:13:25.813674 4463 log.go:172] (0xc0006ecbe0) (1) Data frame handling\nI0525 01:13:25.813684 4463 log.go:172] (0xc0006ecbe0) (1) Data frame sent\nI0525 01:13:25.813696 4463 log.go:172] (0xc000dca2c0) (0xc0006ecbe0) Stream removed, broadcasting: 1\nI0525 01:13:25.813750 4463 log.go:172] (0xc000dca2c0) Go away received\nI0525 01:13:25.814022 4463 log.go:172] (0xc000dca2c0) (0xc0006ecbe0) Stream removed, broadcasting: 1\nI0525 01:13:25.814033 4463 log.go:172] (0xc000dca2c0) (0xc00071ce60) Stream removed, broadcasting: 3\nI0525 01:13:25.814039 4463 log.go:172] (0xc000dca2c0) (0xc00071d400) Stream removed, broadcasting: 5\n" May 25 01:13:25.821: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 01:13:25.821: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 01:13:25.826: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 25 01:13:35.831: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 25 01:13:35.832: INFO: Waiting for statefulset status.replicas updated to 0 May 25 01:13:35.852: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999721s May 25 01:13:36.856: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991463691s May 25 01:13:37.860: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987208909s May 25 01:13:38.865: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 6.983023637s May 25 01:13:39.870: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.977964986s May 25 01:13:41.099: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972778251s May 25 01:13:42.103: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.744659644s May 25 01:13:43.107: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.74054391s May 25 01:13:44.111: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.736072851s May 25 01:13:45.116: INFO: Verifying statefulset ss doesn't scale past 1 for another 732.075784ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3032 May 25 01:13:46.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 01:13:46.379: INFO: stderr: "I0525 01:13:46.277286 4498 log.go:172] (0xc00063c8f0) (0xc00025af00) Create stream\nI0525 01:13:46.277356 4498 log.go:172] (0xc00063c8f0) (0xc00025af00) Stream added, broadcasting: 1\nI0525 01:13:46.280211 4498 log.go:172] (0xc00063c8f0) Reply frame received for 1\nI0525 01:13:46.280254 4498 log.go:172] (0xc00063c8f0) (0xc000308460) Create stream\nI0525 01:13:46.280271 4498 log.go:172] (0xc00063c8f0) (0xc000308460) Stream added, broadcasting: 3\nI0525 01:13:46.281694 4498 log.go:172] (0xc00063c8f0) Reply frame received for 3\nI0525 01:13:46.281735 4498 log.go:172] (0xc00063c8f0) (0xc00025bd60) Create stream\nI0525 01:13:46.281744 4498 log.go:172] (0xc00063c8f0) (0xc00025bd60) Stream added, broadcasting: 5\nI0525 01:13:46.282595 4498 log.go:172] (0xc00063c8f0) Reply frame received for 5\nI0525 01:13:46.370388 4498 log.go:172] (0xc00063c8f0) Data frame received for 5\nI0525 01:13:46.370416 4498 log.go:172] (0xc00025bd60) (5) Data frame handling\nI0525 01:13:46.370428 4498 log.go:172] (0xc00025bd60) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 01:13:46.370468 4498 log.go:172] (0xc00063c8f0) Data frame received for 3\nI0525 01:13:46.370513 4498 log.go:172] (0xc000308460) (3) Data frame handling\nI0525 01:13:46.370648 4498 log.go:172] (0xc00063c8f0) Data frame received for 5\nI0525 01:13:46.370661 4498 log.go:172] (0xc00025bd60) (5) Data frame handling\nI0525 01:13:46.370733 4498 log.go:172] (0xc000308460) (3) Data frame sent\nI0525 01:13:46.370759 4498 log.go:172] (0xc00063c8f0) Data frame received for 3\nI0525 01:13:46.370773 4498 log.go:172] (0xc000308460) (3) Data frame handling\nI0525 01:13:46.372125 4498 log.go:172] (0xc00063c8f0) Data frame received for 1\nI0525 01:13:46.372138 4498 log.go:172] (0xc00025af00) (1) Data frame handling\nI0525 01:13:46.372144 4498 log.go:172] (0xc00025af00) (1) Data frame sent\nI0525 01:13:46.372161 4498 log.go:172] (0xc00063c8f0) (0xc00025af00) Stream removed, broadcasting: 1\nI0525 01:13:46.372171 4498 log.go:172] (0xc00063c8f0) Go away received\nI0525 01:13:46.372542 4498 log.go:172] (0xc00063c8f0) (0xc00025af00) Stream removed, broadcasting: 1\nI0525 01:13:46.372564 4498 log.go:172] (0xc00063c8f0) (0xc000308460) Stream removed, broadcasting: 3\nI0525 01:13:46.372576 4498 log.go:172] (0xc00063c8f0) (0xc00025bd60) Stream removed, broadcasting: 5\n" May 25 01:13:46.380: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 01:13:46.380: INFO: stdout of mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 01:13:46.384: INFO: Found 1 stateful pods, waiting for 3 May 25 01:13:56.389: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 25 01:13:56.389: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 25 01:13:56.389: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 25 01:13:56.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 01:13:56.643: INFO: stderr: "I0525 01:13:56.543097 4521 log.go:172] (0xc000abb6b0) (0xc0009c0460) Create stream\nI0525 01:13:56.543172 4521 log.go:172] (0xc000abb6b0) (0xc0009c0460) Stream added, broadcasting: 1\nI0525 01:13:56.548348 4521 log.go:172] (0xc000abb6b0) Reply frame received for 1\nI0525 01:13:56.548377 4521 log.go:172] (0xc000abb6b0) (0xc00073af00) Create stream\nI0525 01:13:56.548386 4521 log.go:172] (0xc000abb6b0) (0xc00073af00) Stream added, broadcasting: 3\nI0525 01:13:56.549689 4521 log.go:172] (0xc000abb6b0) Reply frame received for 3\nI0525 01:13:56.549753 4521 log.go:172] (0xc000abb6b0) (0xc00066a5a0) Create stream\nI0525 01:13:56.549787 4521 log.go:172] (0xc000abb6b0) (0xc00066a5a0) Stream added, broadcasting: 5\nI0525 01:13:56.550640 4521 log.go:172] (0xc000abb6b0) Reply frame received for 5\nI0525 01:13:56.636355 4521 log.go:172] (0xc000abb6b0) Data frame received for 5\nI0525 01:13:56.636402 4521 log.go:172] (0xc00066a5a0) (5) Data frame handling\nI0525 01:13:56.636431 4521 log.go:172] (0xc00066a5a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 01:13:56.636455 4521 log.go:172] (0xc000abb6b0) Data frame received for 5\nI0525 01:13:56.636474 4521 log.go:172] (0xc00066a5a0) (5) Data frame handling\nI0525 01:13:56.636505 4521 log.go:172] (0xc000abb6b0) Data frame received for 3\nI0525 01:13:56.636520 4521 log.go:172] (0xc00073af00) (3) Data frame handling\nI0525 01:13:56.636532 4521 log.go:172] (0xc00073af00) (3) Data frame sent\nI0525 01:13:56.636544 4521 log.go:172] (0xc000abb6b0) Data frame received for 3\nI0525 01:13:56.636556 4521 log.go:172] (0xc00073af00) (3) Data frame handling\nI0525 01:13:56.638371 4521 log.go:172] (0xc000abb6b0) Data frame received for 1\nI0525 01:13:56.638394 4521 log.go:172] (0xc0009c0460) (1) Data frame handling\nI0525 01:13:56.638405 4521 log.go:172] (0xc0009c0460) (1) Data frame sent\nI0525 01:13:56.638416 4521 log.go:172] (0xc000abb6b0) (0xc0009c0460) Stream removed, broadcasting: 1\nI0525 01:13:56.638456 4521 log.go:172] (0xc000abb6b0) Go away received\nI0525 01:13:56.638742 4521 log.go:172] (0xc000abb6b0) (0xc0009c0460) Stream removed, broadcasting: 1\nI0525 01:13:56.638758 4521 log.go:172] (0xc000abb6b0) (0xc00073af00) Stream removed, broadcasting: 3\nI0525 01:13:56.638767 4521 log.go:172] (0xc000abb6b0) (0xc00066a5a0) Stream removed, broadcasting: 5\n" May 25 01:13:56.643: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 01:13:56.643: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 01:13:56.643: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 01:13:56.940: INFO: stderr: "I0525 01:13:56.839136 4541 log.go:172] (0xc000b4cdc0) (0xc00037b360) Create stream\nI0525 01:13:56.839194 4541 log.go:172] (0xc000b4cdc0) (0xc00037b360) Stream added, broadcasting: 1\nI0525 01:13:56.841864 4541 log.go:172] (0xc000b4cdc0) Reply frame received for 1\nI0525 01:13:56.841920 4541 log.go:172] (0xc000b4cdc0) (0xc00053e140) Create stream\nI0525 01:13:56.841946 4541 log.go:172] (0xc000b4cdc0) (0xc00053e140) Stream added, broadcasting: 3\nI0525 01:13:56.843113 4541 log.go:172] (0xc000b4cdc0) Reply frame received for 3\nI0525 01:13:56.843152 4541 log.go:172] (0xc000b4cdc0) (0xc000306000) Create stream\nI0525 01:13:56.843168 4541 log.go:172] (0xc000b4cdc0) (0xc000306000) Stream added, broadcasting: 5\nI0525 01:13:56.844122 4541 log.go:172] (0xc000b4cdc0) Reply frame received for 5\nI0525 01:13:56.906731 4541 log.go:172] (0xc000b4cdc0) Data frame received for 5\nI0525 01:13:56.906759 4541 log.go:172] (0xc000306000) (5) Data frame handling\nI0525 01:13:56.906777 4541 log.go:172] (0xc000306000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 01:13:56.931869 4541 log.go:172] (0xc000b4cdc0) Data frame received for 3\nI0525 01:13:56.931892 4541 log.go:172] (0xc00053e140) (3) Data frame handling\nI0525 01:13:56.931907 4541 log.go:172] (0xc00053e140) (3) Data frame sent\nI0525 01:13:56.931917 4541 log.go:172] (0xc000b4cdc0) Data frame received for 3\nI0525 01:13:56.931927 4541 log.go:172] (0xc00053e140) (3) Data frame handling\nI0525 01:13:56.932418 4541 log.go:172] (0xc000b4cdc0) Data frame received for 5\nI0525 01:13:56.932446 4541 log.go:172] (0xc000306000) (5) Data frame handling\nI0525 01:13:56.933806 4541 log.go:172] (0xc000b4cdc0) Data frame received for 1\nI0525 01:13:56.933844 4541 log.go:172] (0xc00037b360) (1) Data frame handling\nI0525 01:13:56.933867 4541 log.go:172] (0xc00037b360) (1) Data frame sent\nI0525 01:13:56.933882 4541 log.go:172] (0xc000b4cdc0) (0xc00037b360) Stream removed, broadcasting: 1\nI0525 01:13:56.933906 4541 log.go:172] (0xc000b4cdc0) Go away received\nI0525 01:13:56.934253 4541 log.go:172] (0xc000b4cdc0) (0xc00037b360) Stream removed, broadcasting: 1\nI0525 01:13:56.934270 4541 log.go:172] (0xc000b4cdc0) (0xc00053e140) Stream removed, broadcasting: 3\nI0525 01:13:56.934278 4541 log.go:172] (0xc000b4cdc0) (0xc000306000) Stream removed, broadcasting: 5\n" May 25 01:13:56.940: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 01:13:56.940: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 01:13:56.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 01:13:57.224: INFO: stderr: "I0525 01:13:57.082685 4561 log.go:172] (0xc00003ad10) (0xc00066c460) Create stream\nI0525 01:13:57.082741 4561 log.go:172] (0xc00003ad10) (0xc00066c460) Stream added, broadcasting: 1\nI0525 01:13:57.085353 4561 log.go:172] (0xc00003ad10) Reply frame received for 1\nI0525 01:13:57.085412 4561 log.go:172] (0xc00003ad10) (0xc000139900) Create stream\nI0525 01:13:57.085439 4561 log.go:172] (0xc00003ad10) 
(0xc000139900) Stream added, broadcasting: 3\nI0525 01:13:57.086672 4561 log.go:172] (0xc00003ad10) Reply frame received for 3\nI0525 01:13:57.086736 4561 log.go:172] (0xc00003ad10) (0xc00066cc80) Create stream\nI0525 01:13:57.086755 4561 log.go:172] (0xc00003ad10) (0xc00066cc80) Stream added, broadcasting: 5\nI0525 01:13:57.087567 4561 log.go:172] (0xc00003ad10) Reply frame received for 5\nI0525 01:13:57.149738 4561 log.go:172] (0xc00003ad10) Data frame received for 5\nI0525 01:13:57.149761 4561 log.go:172] (0xc00066cc80) (5) Data frame handling\nI0525 01:13:57.149773 4561 log.go:172] (0xc00066cc80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 01:13:57.215973 4561 log.go:172] (0xc00003ad10) Data frame received for 3\nI0525 01:13:57.216005 4561 log.go:172] (0xc000139900) (3) Data frame handling\nI0525 01:13:57.216013 4561 log.go:172] (0xc000139900) (3) Data frame sent\nI0525 01:13:57.216018 4561 log.go:172] (0xc00003ad10) Data frame received for 3\nI0525 01:13:57.216022 4561 log.go:172] (0xc000139900) (3) Data frame handling\nI0525 01:13:57.216242 4561 log.go:172] (0xc00003ad10) Data frame received for 5\nI0525 01:13:57.216362 4561 log.go:172] (0xc00066cc80) (5) Data frame handling\nI0525 01:13:57.218024 4561 log.go:172] (0xc00003ad10) Data frame received for 1\nI0525 01:13:57.218062 4561 log.go:172] (0xc00066c460) (1) Data frame handling\nI0525 01:13:57.218093 4561 log.go:172] (0xc00066c460) (1) Data frame sent\nI0525 01:13:57.218123 4561 log.go:172] (0xc00003ad10) (0xc00066c460) Stream removed, broadcasting: 1\nI0525 01:13:57.218148 4561 log.go:172] (0xc00003ad10) Go away received\nI0525 01:13:57.218550 4561 log.go:172] (0xc00003ad10) (0xc00066c460) Stream removed, broadcasting: 1\nI0525 01:13:57.218583 4561 log.go:172] (0xc00003ad10) (0xc000139900) Stream removed, broadcasting: 3\nI0525 01:13:57.218603 4561 log.go:172] (0xc00003ad10) (0xc00066cc80) Stream removed, broadcasting: 5\n" May 25 01:13:57.224: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 01:13:57.225: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 01:13:57.225: INFO: Waiting for statefulset status.replicas updated to 0 May 25 01:13:57.242: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 25 01:14:07.314: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 25 01:14:07.314: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 25 01:14:07.314: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 25 01:14:07.363: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999303s May 25 01:14:08.383: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.959198566s May 25 01:14:09.388: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.93972123s May 25 01:14:10.396: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.934266599s May 25 01:14:11.400: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.926736923s May 25 01:14:12.415: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.921950663s May 25 01:14:13.420: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.907484486s May 25 01:14:14.424: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.902512149s May 25 01:14:15.430: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 1.898376854s May 25 01:14:16.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 892.907617ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3032 May 25 01:14:17.438: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 01:14:17.640: INFO: stderr: "I0525 01:14:17.565385 4583 log.go:172] (0xc000ae14a0) (0xc000b825a0) Create stream\nI0525 01:14:17.565478 4583 log.go:172] (0xc000ae14a0) (0xc000b825a0) Stream added, broadcasting: 1\nI0525 01:14:17.571171 4583 log.go:172] (0xc000ae14a0) Reply frame received for 1\nI0525 01:14:17.571218 4583 log.go:172] (0xc000ae14a0) (0xc0004d85a0) Create stream\nI0525 01:14:17.571233 4583 log.go:172] (0xc000ae14a0) (0xc0004d85a0) Stream added, broadcasting: 3\nI0525 01:14:17.572265 4583 log.go:172] (0xc000ae14a0) Reply frame received for 3\nI0525 01:14:17.572304 4583 log.go:172] (0xc000ae14a0) (0xc0004bc280) Create stream\nI0525 01:14:17.572317 4583 log.go:172] (0xc000ae14a0) (0xc0004bc280) Stream added, broadcasting: 5\nI0525 01:14:17.573076 4583 log.go:172] (0xc000ae14a0) Reply frame received for 5\nI0525 01:14:17.632227 4583 log.go:172] (0xc000ae14a0) Data frame received for 3\nI0525 01:14:17.632248 4583 log.go:172] (0xc0004d85a0) (3) Data frame handling\nI0525 01:14:17.632254 4583 log.go:172] (0xc0004d85a0) (3) Data frame sent\nI0525 01:14:17.632259 4583 log.go:172] (0xc000ae14a0) Data frame received for 3\nI0525 01:14:17.632264 4583 log.go:172] (0xc0004d85a0) (3) Data frame handling\nI0525 01:14:17.632285 4583 log.go:172] (0xc000ae14a0) Data frame received for 5\nI0525 01:14:17.632302 4583 log.go:172] (0xc0004bc280) (5) Data frame handling\nI0525 01:14:17.632319 4583 log.go:172] (0xc0004bc280) (5) Data frame sent\nI0525 01:14:17.632328 4583 log.go:172] (0xc000ae14a0) Data frame received for 5\nI0525 01:14:17.632337 4583 log.go:172] (0xc0004bc280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 01:14:17.633898 4583 log.go:172] (0xc000ae14a0) Data frame received for 1\nI0525 01:14:17.633912 4583 log.go:172] (0xc000b825a0) (1) Data frame handling\nI0525 01:14:17.633921 4583 log.go:172] (0xc000b825a0) (1) Data frame sent\nI0525 01:14:17.633995 4583 log.go:172] (0xc000ae14a0) (0xc000b825a0) Stream removed, broadcasting: 1\nI0525 01:14:17.634052 4583 log.go:172] (0xc000ae14a0) Go away received\nI0525 01:14:17.634493 4583 log.go:172] (0xc000ae14a0) (0xc000b825a0) Stream removed, broadcasting: 1\nI0525 01:14:17.634514 4583 log.go:172] (0xc000ae14a0) (0xc0004d85a0) Stream removed, broadcasting: 3\nI0525 01:14:17.634529 4583 log.go:172] (0xc000ae14a0) (0xc0004bc280) Stream removed, broadcasting: 5\n" May 25 01:14:17.640: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 01:14:17.640: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 01:14:17.640: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 01:14:17.847: INFO: stderr: "I0525 01:14:17.777536 4603 log.go:172] (0xc000ae8000) (0xc000558280) Create stream\nI0525
01:14:17.777592 4603 log.go:172] (0xc000ae8000) (0xc000558280) Stream added, broadcasting: 1\nI0525 01:14:17.779957 4603 log.go:172] (0xc000ae8000) Reply frame received for 1\nI0525 01:14:17.779993 4603 log.go:172] (0xc000ae8000) (0xc000542e60) Create stream\nI0525 01:14:17.780005 4603 log.go:172] (0xc000ae8000) (0xc000542e60) Stream added, broadcasting: 3\nI0525 01:14:17.780893 4603 log.go:172] (0xc000ae8000) Reply frame received for 3\nI0525 01:14:17.780931 4603 log.go:172] (0xc000ae8000) (0xc000478140) Create stream\nI0525 01:14:17.780943 4603 log.go:172] (0xc000ae8000) (0xc000478140) Stream added, broadcasting: 5\nI0525 01:14:17.781831 4603 log.go:172] (0xc000ae8000) Reply frame received for 5\nI0525 01:14:17.839742 4603 log.go:172] (0xc000ae8000) Data frame received for 5\nI0525 01:14:17.839800 4603 log.go:172] (0xc000478140) (5) Data frame handling\nI0525 01:14:17.839818 4603 log.go:172] (0xc000478140) (5) Data frame sent\nI0525 01:14:17.839831 4603 log.go:172] (0xc000ae8000) Data frame received for 5\nI0525 01:14:17.839841 4603 log.go:172] (0xc000478140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 01:14:17.839882 4603 log.go:172] (0xc000ae8000) Data frame received for 3\nI0525 01:14:17.839907 4603 log.go:172] (0xc000542e60) (3) Data frame handling\nI0525 01:14:17.839927 4603 log.go:172] (0xc000542e60) (3) Data frame sent\nI0525 01:14:17.839947 4603 log.go:172] (0xc000ae8000) Data frame received for 3\nI0525 01:14:17.839962 4603 log.go:172] (0xc000542e60) (3) Data frame handling\nI0525 01:14:17.841478 4603 log.go:172] (0xc000ae8000) Data frame received for 1\nI0525 01:14:17.841499 4603 log.go:172] (0xc000558280) (1) Data frame handling\nI0525 01:14:17.841514 4603 log.go:172] (0xc000558280) (1) Data frame sent\nI0525 01:14:17.841523 4603 log.go:172] (0xc000ae8000) (0xc000558280) Stream removed, broadcasting: 1\nI0525 01:14:17.841539 4603 log.go:172] (0xc000ae8000) Go away received\nI0525 01:14:17.841985 4603 log.go:172] (0xc000ae8000) (0xc000558280) Stream removed, broadcasting: 1\nI0525 01:14:17.842008 4603 log.go:172] (0xc000ae8000) (0xc000542e60) Stream removed, broadcasting: 3\nI0525 01:14:17.842020 4603 log.go:172] (0xc000ae8000) (0xc000478140) Stream removed, broadcasting: 5\n" May 25 01:14:17.847: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 01:14:17.847: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 01:14:17.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 01:14:18.074: INFO: stderr: "I0525 01:14:17.992945 4623 log.go:172] (0xc00003a0b0) (0xc00044cd20) Create stream\nI0525 01:14:17.993021 4623 log.go:172] (0xc00003a0b0) (0xc00044cd20) Stream added, broadcasting: 1\nI0525 01:14:17.995062 4623 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0525 01:14:17.995106 4623 log.go:172] (0xc00003a0b0) (0xc0003d1180) Create stream\nI0525 01:14:17.995122 4623 log.go:172] (0xc00003a0b0) (0xc0003d1180) Stream added, broadcasting: 3\nI0525 01:14:17.996101 4623 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0525 01:14:17.996138 4623 log.go:172] (0xc00003a0b0) (0xc00015f0e0) Create stream\nI0525 01:14:17.996156 4623 log.go:172] (0xc00003a0b0) (0xc00015f0e0) Stream added, broadcasting: 5\nI0525 01:14:17.997093 4623 
log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0525 01:14:18.067736 4623 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 01:14:18.067781 4623 log.go:172] (0xc0003d1180) (3) Data frame handling\nI0525 01:14:18.067798 4623 log.go:172] (0xc0003d1180) (3) Data frame sent\nI0525 01:14:18.067810 4623 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0525 01:14:18.067818 4623 log.go:172] (0xc0003d1180) (3) Data frame handling\nI0525 01:14:18.067858 4623 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 01:14:18.067871 4623 log.go:172] (0xc00015f0e0) (5) Data frame handling\nI0525 01:14:18.067891 4623 log.go:172] (0xc00015f0e0) (5) Data frame sent\nI0525 01:14:18.067919 4623 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0525 01:14:18.067944 4623 log.go:172] (0xc00015f0e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 01:14:18.069727 4623 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0525 01:14:18.069756 4623 log.go:172] (0xc00044cd20) (1) Data frame handling\nI0525 01:14:18.069791 4623 log.go:172] (0xc00044cd20) (1) Data frame sent\nI0525 01:14:18.069815 4623 log.go:172] (0xc00003a0b0) (0xc00044cd20) Stream removed, broadcasting: 1\nI0525 01:14:18.069835 4623 log.go:172] (0xc00003a0b0) Go away received\nI0525 01:14:18.070190 4623 log.go:172] (0xc00003a0b0) (0xc00044cd20) Stream removed, broadcasting: 1\nI0525 01:14:18.070224 4623 log.go:172] (0xc00003a0b0) (0xc0003d1180) Stream removed, broadcasting: 3\nI0525 01:14:18.070247 4623 log.go:172] (0xc00003a0b0) (0xc00015f0e0) Stream removed, broadcasting: 5\n" May 25 01:14:18.074: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 01:14:18.074: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 01:14:18.074: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 25 01:14:38.094: INFO: Deleting all statefulset in ns statefulset-3032 May 25 01:14:38.097: INFO: Scaling statefulset ss to 0 May 25 01:14:38.105: INFO: Waiting for statefulset status.replicas updated to 0 May 25 01:14:38.107: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:14:38.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3032" for this suite. 
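The run above drives scaling through kubectl and the e2e framework's helpers; the readiness flips come from moving the web server's index.html aside inside each pod so its readiness check fails. Below is a minimal client-go sketch, not the framework's own helper, of the same scaling operation through the scale subresource. The names statefulset-3032 and ss and the kubeconfig path are taken from this run; everything else is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from this run's log; adjust for other clusters.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx := context.TODO()
	sts := client.AppsV1().StatefulSets("statefulset-3032")

	// Read the current scale through the scale subresource, then set it to 0,
	// mirroring the "Scaling statefulset ss to 0" steps in the log above.
	scale, err := sts.GetScale(ctx, "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("replicas before: %d\n", scale.Spec.Replicas)

	scale.Spec.Replicas = 0
	if _, err := sts.UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

With the default OrderedReady pod management policy the controller creates replicas in ordinal order (ss-0, ss-1, ss-2), deletes them in reverse, and will not proceed past a pod that is not Running and Ready, which is exactly the halting behaviour this spec asserts.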
• [SLOW TEST:85.770 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":282,"skipped":4641,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:14:38.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 01:14:38.788: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 01:14:40.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725966078, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725966078, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725966078, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725966078, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 01:14:43.994: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook 
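failurePolicy: Fail is what makes the rejection unconditional in the steps above: the registered webhook points at a backend the API server cannot talk to, so every matching request is denied during admission instead of being let through. The following is a hedged client-go sketch of such a registration; the service name e2e-test-webhook and namespace webhook-6284 come from the log, while the configuration name, webhook name, and path are hypothetical.

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	failClosed := admissionregistrationv1.Fail
	noSideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/unreachable" // hypothetical path; the test points at a server that cannot answer

	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-closed-example"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "fail-closed.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-6284",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			// Fail closed: if the webhook backend is unreachable, the API
			// server rejects the matching request instead of admitting it.
			FailurePolicy:           &failClosed,
			SideEffects:             &noSideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}

	if _, err := client.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(
		context.TODO(), cfg, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}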
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:14:44.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6284" for this suite. STEP: Destroying namespace "webhook-6284-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.073 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":283,"skipped":4683,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:14:44.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:15:01.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4077" for this suite. • [SLOW TEST:17.137 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":288,"completed":284,"skipped":4704,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:15:01.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 25 01:15:06.639: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:15:07.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3931" for this suite. • [SLOW TEST:6.258 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":285,"skipped":4746,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:15:07.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 01:15:08.773: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 01:15:10.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725966109, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725966109, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725966109, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725966108, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 01:15:12.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725966109, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725966109, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725966109, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725966108, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 01:15:15.943: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:15:16.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3424" for this suite. STEP: Destroying namespace "webhook-3424-markers" for this suite. 
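For the mutating case just logged, the patch is applied during admission, so the object returned by the create call already carries whatever the webhook injected. A small sketch of observing that from a client follows; the suite namespace webhook-3424 comes from the log, while the ConfigMap name and the exact injected keys are assumptions, since the log does not show them.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "to-be-mutated"}, // hypothetical name
		Data:       map[string]string{"example": "1"},
	}
	// The mutating webhook patches the object during admission, so the
	// response from Create already contains any data keys it injected.
	created, err := client.CoreV1().ConfigMaps("webhook-3424").Create(
		context.TODO(), cm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("data after admission: %v\n", created.Data)
}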
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.479 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":286,"skipped":4775,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:15:16.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 01:15:16.279: INFO: Creating deployment "webserver-deployment" May 25 01:15:16.283: INFO: Waiting for observed generation 1 May 25 01:15:18.303: INFO: Waiting for all required pods to come up May 25 01:15:18.308: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 25 01:15:30.318: INFO: Waiting for deployment "webserver-deployment" to complete May 25 01:15:30.323: INFO: Updating deployment "webserver-deployment" with a non-existent image May 25 01:15:30.331: INFO: Updating deployment webserver-deployment May 25 01:15:30.331: INFO: Waiting for observed generation 2 May 25 01:15:32.342: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 25 01:15:32.344: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 25 01:15:32.506: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 25 01:15:32.516: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 25 01:15:32.516: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 25 01:15:32.518: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 25 01:15:32.523: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 25 01:15:32.523: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 25 01:15:32.531: INFO: Updating deployment webserver-deployment May 25 01:15:32.531: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 25 01:15:32.991: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 25 01:15:33.005: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 25 01:15:33.258: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3008 /apis/apps/v1/namespaces/deployment-3008/deployments/webserver-deployment e3082f8e-5ffe-43b7-bc9f-5b44e08e0a44 7437321 3 2020-05-25 01:15:16 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-25 01:15:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004467fc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-25 01:15:30 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-25 01:15:32 +0000 UTC,LastTransitionTime:2020-05-25 01:15:32 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 25 01:15:33.410: INFO: New 
ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-3008 /apis/apps/v1/namespaces/deployment-3008/replicasets/webserver-deployment-6676bcd6d4 f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb 7437378 3 2020-05-25 01:15:30 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment e3082f8e-5ffe-43b7-bc9f-5b44e08e0a44 0xc002bba6b7 0xc002bba6b8}] [] [{kube-controller-manager Update apps/v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3082f8e-5ffe-43b7-bc9f-5b44e08e0a44\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002bba738 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 01:15:33.410: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 25 01:15:33.410: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-3008 /apis/apps/v1/namespaces/deployment-3008/replicasets/webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 7437369 3 2020-05-25 01:15:16 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment e3082f8e-5ffe-43b7-bc9f-5b44e08e0a44 0xc002bba797 0xc002bba798}] [] [{kube-controller-manager Update apps/v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3082f8e-5ffe-43b7-bc9f-5b44e08e0a44\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002bba808 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 25 01:15:33.475: INFO: Pod "webserver-deployment-6676bcd6d4-2zmwz" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2zmwz webserver-deployment-6676bcd6d4- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-6676bcd6d4-2zmwz 4bc8374a-a9b0-4db7-b4ca-524fe564ef77 7437335 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb 0xc002bbad67 0xc002bbad68}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.475: INFO: Pod "webserver-deployment-6676bcd6d4-4g5qk" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4g5qk webserver-deployment-6676bcd6d4- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-6676bcd6d4-4g5qk fef717b5-408d-4eb9-9066-02f3eb4ff1a4 7437329 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb 0xc002bbaeb7 0xc002bbaeb8}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainer
s:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.475: INFO: Pod "webserver-deployment-6676bcd6d4-5856f" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5856f webserver-deployment-6676bcd6d4- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-6676bcd6d4-5856f 128521b5-9373-4d4b-8fa3-bc25f4ad7579 7437287 0 2020-05-25 01:15:30 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb 0xc002bbaff7 0xc002bbaff8}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-25 01:15:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.476: INFO: Pod "webserver-deployment-6676bcd6d4-c5r4s" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-c5r4s webserver-deployment-6676bcd6d4- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-6676bcd6d4-c5r4s ccd94311-9b24-472c-b0b3-d65abbd76ddc 7437266 0 2020-05-25 01:15:30 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb 0xc002bbb1c7 0xc002bbb1c8}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-25 01:15:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.476: INFO: Pod "webserver-deployment-6676bcd6d4-drvkd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-drvkd webserver-deployment-6676bcd6d4- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-6676bcd6d4-drvkd f8b500df-54c0-49aa-8ada-c905e2253030 7437346 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb 0xc002bbb387 0xc002bbb388}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.476: INFO: Pod "webserver-deployment-6676bcd6d4-dxftt" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dxftt webserver-deployment-6676bcd6d4- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-6676bcd6d4-dxftt 8b84df1c-28f5-4293-982b-619addeff22f 7437275 0 2020-05-25 01:15:30 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb 0xc002bbb4c7 0xc002bbb4c8}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-25 01:15:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.476: INFO: Pod "webserver-deployment-6676bcd6d4-l6j4s" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-l6j4s webserver-deployment-6676bcd6d4- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-6676bcd6d4-l6j4s acba5e5b-9d94-4ee9-b548-03f8049fc3e0 7437367 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb 0xc002bbb697 0xc002bbb698}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.476: INFO: Pod "webserver-deployment-6676bcd6d4-lmr6r" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lmr6r webserver-deployment-6676bcd6d4- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-6676bcd6d4-lmr6r 39ecc1c6-ae72-449f-a198-6debebd4850e 7437293 0 2020-05-25 01:15:30 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb 0xc002bbb7d7 0xc002bbb7d8}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-25 01:15:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.476: INFO: Pod "webserver-deployment-6676bcd6d4-pp4fk" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pp4fk webserver-deployment-6676bcd6d4- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-6676bcd6d4-pp4fk 04ad475f-aef7-4fd7-80d4-63814aa6f307 7437357 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb 0xc002bbba07 0xc002bbba08}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.477: INFO: Pod "webserver-deployment-6676bcd6d4-sszlr" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-sszlr webserver-deployment-6676bcd6d4- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-6676bcd6d4-sszlr 9b22676d-8235-485c-865a-af93cd57f086 7437348 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb 0xc002bbbd47 0xc002bbbd48}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.477: INFO: Pod "webserver-deployment-6676bcd6d4-wk8j4" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-wk8j4 webserver-deployment-6676bcd6d4- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-6676bcd6d4-wk8j4 d862bfdf-eb7c-4dd2-878e-2fc8c1b0bbc9 7437296 0 2020-05-25 01:15:30 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb 0xc002bbbf07 0xc002bbbf08}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriod
Seconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-25 01:15:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.477: INFO: Pod "webserver-deployment-6676bcd6d4-wvblm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-wvblm webserver-deployment-6676bcd6d4- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-6676bcd6d4-wvblm 15cd3216-333a-4445-92c5-d3b6eec1111f 7437350 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb 0xc0009b6177 0xc0009b6178}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.477: INFO: Pod "webserver-deployment-6676bcd6d4-zjz86" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zjz86 webserver-deployment-6676bcd6d4- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-6676bcd6d4-zjz86 b3977ed8-7720-4058-aa8c-3b437ef8180d 7437379 0 2020-05-25 01:15:32 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb 0xc0009b62b7 0xc0009b62b8}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f32ad0a8-1b1b-4539-b650-b8d9b9f78ffb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriod
Seconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-25 01:15:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.478: INFO: Pod "webserver-deployment-84855cf797-6bg7r" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6bg7r webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-6bg7r f8e2b20c-8844-4e94-9e80-97344799673c 7437195 0 2020-05-25 01:15:16 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc0009b6467 0xc0009b6468}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.250\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,V
alue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.250,StartTime:2020-05-25 01:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 01:15:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://50330093ef87b12fe094c1480b913c92f5ad96f61446ca93e84fe0c3396f95ff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.250,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.478: INFO: Pod "webserver-deployment-84855cf797-6sk5b" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6sk5b webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-6sk5b fbfbc2e4-bd2c-4720-8622-3436d54281f1 7437187 0 2020-05-25 01:15:16 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc0009b66c7 0xc0009b66c8}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.249\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 
01:15:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.249,StartTime:2020-05-25 01:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 01:15:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7bdf0aa37840cb6c58a49c474d720e41f88990d1b800d4e8bf772ba7acc84992,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.249,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.478: INFO: Pod "webserver-deployment-84855cf797-bsrhq" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bsrhq webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-bsrhq 15c890f7-cf38-4b35-878e-e09bf4189ac9 7437204 0 2020-05-25 01:15:16 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc0009b6917 0xc0009b6918}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.251\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 
01:15:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.251,StartTime:2020-05-25 01:15:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 01:15:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b6bc2bcca0e71e2d2d687cbbc2299782ee54d16d48e21ab44b8b33d08b0aafcb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.251,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.478: INFO: Pod "webserver-deployment-84855cf797-chqsh" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-chqsh webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-chqsh 05e57bfb-a22d-4fd1-9a6a-047c3debcace 7437361 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc0009b6ad7 0xc0009b6ad8}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.478: INFO: Pod "webserver-deployment-84855cf797-d6wgb" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-d6wgb webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-d6wgb 34bbb4e9-abf3-4224-909f-a80f70fceeca 7437337 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc0009b6c37 0xc0009b6c38}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.478: INFO: Pod "webserver-deployment-84855cf797-ddf57" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ddf57 webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-ddf57 10789c29-d7cf-4f6c-934c-2c9f9170d767 7437358 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc0009b6d67 0xc0009b6d68}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:defaul
t-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.479: INFO: Pod "webserver-deployment-84855cf797-dmdtt" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dmdtt webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-dmdtt decc09d9-fe0d-4952-ba7c-c2581f04059e 7437186 0 2020-05-25 01:15:16 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc0009b6e97 0xc0009b6e98}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.246\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 
01:15:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.246,StartTime:2020-05-25 01:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 01:15:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5cb8edf0a92bd39bbec7a53add9facc5bb4b059fd10b74fa62487bab3b248cc2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.246,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.479: INFO: Pod "webserver-deployment-84855cf797-g9qh2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-g9qh2 webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-g9qh2 ea10febb-227e-4756-a9d2-d49d2450526f 7437362 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc0009b7807 0xc0009b7808}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.479: INFO: Pod "webserver-deployment-84855cf797-jj4pz" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jj4pz webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-jj4pz fb99e4c7-02e2-44d4-b615-683f6677cd30 7437225 0 2020-05-25 01:15:16 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc0009b7a77 0xc0009b7a78}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.252\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 
01:15:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.252,StartTime:2020-05-25 01:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 01:15:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e68919ed0d1c3e30f5f0dd139336af27b849dc0a25704efaefcd53be3440737e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.252,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.479: INFO: Pod "webserver-deployment-84855cf797-m2ltf" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-m2ltf webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-m2ltf c2e712de-bd59-488a-92b0-2195ee181211 7437322 0 2020-05-25 01:15:32 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc0009b7d27 0xc0009b7d28}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.479: INFO: Pod "webserver-deployment-84855cf797-n9br2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-n9br2 webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-n9br2 d5e3fb53-cb1d-4223-8e1a-813cb9be46d1 7437359 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc0009b7e57 0xc0009b7e58}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.480: INFO: Pod "webserver-deployment-84855cf797-nlk7r" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-nlk7r webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-nlk7r 5d5d3c10-8ece-4217-ad44-e4300b89b5b7 7437384 0 2020-05-25 01:15:32 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc0009b7fa7 0xc0009b7fa8}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alway
s,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-25 01:15:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.480: INFO: Pod "webserver-deployment-84855cf797-r7qph" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-r7qph webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-r7qph c1c5dd08-0b9a-4464-86e0-105c3ba9a502 7437343 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc002e98357 0xc002e98358}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.480: INFO: Pod "webserver-deployment-84855cf797-tbbnd" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tbbnd webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-tbbnd c7df3180-6138-4fb1-96a8-7fd48e4952a4 7437366 0 2020-05-25 01:15:32 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc002e98667 0xc002e98668}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alway
s,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-25 01:15:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.480: INFO: Pod "webserver-deployment-84855cf797-tpdrm" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tpdrm webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-tpdrm 402ee120-ec44-4011-853b-c4523edd9b91 7437218 0 2020-05-25 01:15:16 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc002e98a17 0xc002e98a18}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.248\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.248,StartTime:2020-05-25 01:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 01:15:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://380ae97b143d961ac703e987a00e246402a4e94c3d72e8f8b737ac1167cd91d6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.481: INFO: Pod "webserver-deployment-84855cf797-tzjkx" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tzjkx webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-tzjkx eb38ac5e-12d9-47a3-8fc8-029eb50b6fa2 7437206 0 2020-05-25 01:15:16 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc002e98dc7 0xc002e98dc8}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.247\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 
01:15:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.247,StartTime:2020-05-25 01:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 01:15:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8bde743bb3a20b4b6296923b6a1a7c5aa14e038d27573ed55c92b35bbf2edb2d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.247,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.481: INFO: Pod "webserver-deployment-84855cf797-v5g7l" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-v5g7l webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-v5g7l 4fcb9845-9166-4386-956b-e79f8cfb84ae 7437342 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc002e98f77 0xc002e98f78}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.481: INFO: Pod "webserver-deployment-84855cf797-zs7qs" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-zs7qs webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-zs7qs af914453-d028-422d-bfe2-375f8946821d 7437162 0 2020-05-25 01:15:16 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc002e991a7 0xc002e991a8}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 01:15:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.248\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 
01:15:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.248,StartTime:2020-05-25 01:15:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 01:15:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fddba035a79afc36e777fcdef653a51782df13e8b1c2e4d4d425a1d605ca5d79,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.481: INFO: Pod "webserver-deployment-84855cf797-zw2qs" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-zw2qs webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-zw2qs eaa4d1dc-e771-4cae-bfe1-fcb0a1ece006 7437338 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc002e993a7 0xc002e993a8}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 01:15:33.482: INFO: Pod "webserver-deployment-84855cf797-zxbbm" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-zxbbm webserver-deployment-84855cf797- deployment-3008 /api/v1/namespaces/deployment-3008/pods/webserver-deployment-84855cf797-zxbbm e50940c0-7208-424f-8a91-deee6d5cd795 7437360 0 2020-05-25 01:15:33 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d9fde769-ae98-48a3-ad54-36c85d5d553a 0xc002e996b7 0xc002e996b8}] [] [{kube-controller-manager Update v1 2020-05-25 01:15:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9fde769-ae98-48a3-ad54-36c85d5d553a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vk5xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vk5xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vk5xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 01:15:33 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:15:33.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3008" for this suite. • [SLOW TEST:17.488 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":287,"skipped":4780,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 01:15:33.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 25 01:15:33.944: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 25 01:15:34.152: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 25 01:15:34.152: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 25 01:15:34.224: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 25 01:15:34.224: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource 
requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 25 01:15:34.290: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 25 01:15:34.290: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 25 01:15:43.483: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 01:15:43.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-5372" for this suite. • [SLOW TEST:10.327 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":288,"skipped":4789,"failed":0} SSSSSSSSSSSSSSSSSSMay 25 01:15:43.956: INFO: Running AfterSuite actions on all nodes May 25 01:15:43.956: INFO: Running AfterSuite actions on node 1 May 25 01:15:43.956: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0} Ran 288 of 5095 Specs in 5796.193 seconds SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped PASS
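------------------------------
The LimitRange defaulting exercised by the final spec above can be reproduced outside the conformance harness with a few lines of client-go. The sketch below is illustrative rather than the suite's own source: the object name "limitrange-defaults", the "default" namespace, and the kubeconfig path are assumptions, and it presumes a client-go release (v0.18 or later) whose Create call takes a context. The default request and limit values mirror the ones the log verifies (requests cpu=100m, memory=200Mi, ephemeral-storage=200Gi; limits cpu=500m, memory=500Mi, ephemeral-storage=500Gi).

// Minimal client-go sketch (assumption: client-go v0.18+). It creates a
// LimitRange whose container defaults mirror the values verified in the
// log above. Object name, namespace, and kubeconfig path are illustrative.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite used; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	lr := &corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "limitrange-defaults"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				// Applied as spec.containers[].resources.requests when a pod omits them.
				DefaultRequest: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("100m"),
					corev1.ResourceMemory:           resource.MustParse("200Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("200Gi"),
				},
				// Applied as spec.containers[].resources.limits when a pod omits them.
				Default: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("500m"),
					corev1.ResourceMemory:           resource.MustParse("500Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("500Gi"),
				},
			}},
		},
	}

	created, err := client.CoreV1().LimitRanges("default").Create(context.TODO(), lr, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created LimitRange %s\n", created.Name)
}

A pod subsequently created in that namespace with no resources stanza should come back from the API server carrying exactly the requests and limits checked in the "Verifying requests"/"Verifying limits" lines above.
------------------------------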