I0519 23:38:35.047638 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0519 23:38:35.047861 7 e2e.go:129] Starting e2e run "10938b84-e17e-4690-9e0e-0461ca283558" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589931513 - Will randomize all specs
Will run 288 of 5095 specs

May 19 23:38:35.111: INFO: >>> kubeConfig: /root/.kube/config
May 19 23:38:35.114: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 19 23:38:35.136: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 19 23:38:35.170: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 19 23:38:35.170: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 19 23:38:35.170: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 19 23:38:35.182: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 19 23:38:35.182: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 19 23:38:35.182: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 19 23:38:35.183: INFO: kube-apiserver version: v1.18.2
May 19 23:38:35.183: INFO: >>> kubeConfig: /root/.kube/config
May 19 23:38:35.187: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:38:35.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
May 19 23:38:35.275: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:38:35.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8943" for this suite.
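The test above creates a pod whose resource requests equal its limits for both cpu and memory, then verifies the API server assigned it the Guaranteed QoS class. A minimal sketch of the classification rule Kubernetes applies (simplified from the real implementation; the `qos_class` helper and its dict-based container shape are illustrative, not the actual API):

```python
def qos_class(containers):
    """Classify a pod the way Kubernetes assigns QoS classes (simplified sketch).

    `containers` is a list of dicts with optional "requests" and "limits"
    maps over the "cpu" and "memory" resources.
    """
    any_resources = False
    all_guaranteed = True
    for c in containers:
        if c.get("requests") or c.get("limits"):
            any_resources = True
        # Guaranteed requires every container to set limits for cpu and
        # memory, with requests either unset or equal to the limits.
        for res in ("cpu", "memory"):
            lim = c.get("limits", {}).get(res)
            req = c.get("requests", {}).get(res, lim)
            if lim is None or req != lim:
                all_guaranteed = False
    if not any_resources:
        return "BestEffort"
    return "Guaranteed" if all_guaranteed else "Burstable"
```

With matching requests and limits, as in the test pod, this yields "Guaranteed"; omitting all resources yields "BestEffort", and anything in between is "Burstable".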
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":1,"skipped":22,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:38:35.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 19 23:38:36.035: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 19 23:38:38.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528316, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528316, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable",
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528316, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528316, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 23:38:40.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528316, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528316, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528316, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528316, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 23:38:43.396: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 23:38:43.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] 
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:38:44.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3178" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:9.250 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":2,"skipped":27,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:38:44.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2251 May 19 23:38:48.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2251 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 19 23:38:51.621: INFO: stderr: "I0519 23:38:51.466640 29 log.go:172] (0xc0000e0370) (0xc0007deaa0) Create stream\nI0519 23:38:51.466717 29 log.go:172] (0xc0000e0370) (0xc0007deaa0) Stream added, broadcasting: 1\nI0519 23:38:51.469309 29 log.go:172] (0xc0000e0370) Reply frame received for 1\nI0519 23:38:51.469357 29 log.go:172] (0xc0000e0370) (0xc0006fad20) Create stream\nI0519 23:38:51.469374 29 log.go:172] (0xc0000e0370) (0xc0006fad20) Stream added, broadcasting: 3\nI0519 23:38:51.470335 29 log.go:172] (0xc0000e0370) Reply frame received for 3\nI0519 23:38:51.470363 29 log.go:172] (0xc0000e0370) (0xc0007defa0) Create stream\nI0519 23:38:51.470372 29 log.go:172] (0xc0000e0370) (0xc0007defa0) Stream added, broadcasting: 5\nI0519 23:38:51.471348 29 log.go:172] (0xc0000e0370) Reply frame received for 5\nI0519 23:38:51.571760 29 log.go:172] (0xc0000e0370) Data frame received for 5\nI0519 23:38:51.571790 29 log.go:172] (0xc0007defa0) (5) Data frame handling\nI0519 23:38:51.571813 29 log.go:172] (0xc0007defa0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0519 23:38:51.611275 29 log.go:172] (0xc0000e0370) Data frame received for 3\nI0519 23:38:51.611306 29 log.go:172] (0xc0006fad20) (3) Data frame handling\nI0519 23:38:51.611337 29 log.go:172] (0xc0006fad20) (3) Data frame sent\nI0519 23:38:51.612140 29 log.go:172] (0xc0000e0370) Data frame received for 3\nI0519 23:38:51.612190 29 log.go:172] (0xc0006fad20) (3) Data frame handling\nI0519 23:38:51.612221 29 log.go:172] (0xc0000e0370) Data frame received for 5\nI0519 
23:38:51.612240 29 log.go:172] (0xc0007defa0) (5) Data frame handling\nI0519 23:38:51.614542 29 log.go:172] (0xc0000e0370) Data frame received for 1\nI0519 23:38:51.614581 29 log.go:172] (0xc0007deaa0) (1) Data frame handling\nI0519 23:38:51.614625 29 log.go:172] (0xc0007deaa0) (1) Data frame sent\nI0519 23:38:51.614646 29 log.go:172] (0xc0000e0370) (0xc0007deaa0) Stream removed, broadcasting: 1\nI0519 23:38:51.614691 29 log.go:172] (0xc0000e0370) Go away received\nI0519 23:38:51.614982 29 log.go:172] (0xc0000e0370) (0xc0007deaa0) Stream removed, broadcasting: 1\nI0519 23:38:51.615007 29 log.go:172] (0xc0000e0370) (0xc0006fad20) Stream removed, broadcasting: 3\nI0519 23:38:51.615017 29 log.go:172] (0xc0000e0370) (0xc0007defa0) Stream removed, broadcasting: 5\n" May 19 23:38:51.621: INFO: stdout: "iptables" May 19 23:38:51.621: INFO: proxyMode: iptables May 19 23:38:51.625: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 19 23:38:51.635: INFO: Pod kube-proxy-mode-detector still exists May 19 23:38:53.635: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 19 23:38:53.640: INFO: Pod kube-proxy-mode-detector still exists May 19 23:38:55.635: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 19 23:38:55.638: INFO: Pod kube-proxy-mode-detector still exists May 19 23:38:57.635: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 19 23:38:57.640: INFO: Pod kube-proxy-mode-detector still exists May 19 23:38:59.635: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 19 23:38:59.639: INFO: Pod kube-proxy-mode-detector still exists May 19 23:39:01.635: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 19 23:39:01.639: INFO: Pod kube-proxy-mode-detector still exists May 19 23:39:03.635: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 19 23:39:03.640: INFO: Pod kube-proxy-mode-detector still exists May 19 23:39:05.635: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 
19 23:39:05.639: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-2251 STEP: creating replication controller affinity-clusterip-timeout in namespace services-2251 I0519 23:39:05.714075 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-2251, replica count: 3 I0519 23:39:08.764466 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 23:39:11.764753 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 23:39:11.771: INFO: Creating new exec pod May 19 23:39:16.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2251 execpod-affinity4dvwb -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 19 23:39:17.052: INFO: stderr: "I0519 23:39:16.931966 61 log.go:172] (0xc000afa9a0) (0xc000524820) Create stream\nI0519 23:39:16.932058 61 log.go:172] (0xc000afa9a0) (0xc000524820) Stream added, broadcasting: 1\nI0519 23:39:16.936319 61 log.go:172] (0xc000afa9a0) Reply frame received for 1\nI0519 23:39:16.936371 61 log.go:172] (0xc000afa9a0) (0xc0001f4000) Create stream\nI0519 23:39:16.936387 61 log.go:172] (0xc000afa9a0) (0xc0001f4000) Stream added, broadcasting: 3\nI0519 23:39:16.937399 61 log.go:172] (0xc000afa9a0) Reply frame received for 3\nI0519 23:39:16.937433 61 log.go:172] (0xc000afa9a0) (0xc0003de460) Create stream\nI0519 23:39:16.937449 61 log.go:172] (0xc000afa9a0) (0xc0003de460) Stream added, broadcasting: 5\nI0519 23:39:16.938187 61 log.go:172] (0xc000afa9a0) Reply frame received for 5\nI0519 23:39:17.033482 61 log.go:172] (0xc000afa9a0) Data frame received for 5\nI0519 23:39:17.033516 61 log.go:172] (0xc0003de460) 
(5) Data frame handling\nI0519 23:39:17.033537 61 log.go:172] (0xc0003de460) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0519 23:39:17.044119 61 log.go:172] (0xc000afa9a0) Data frame received for 5\nI0519 23:39:17.044149 61 log.go:172] (0xc0003de460) (5) Data frame handling\nI0519 23:39:17.044174 61 log.go:172] (0xc0003de460) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0519 23:39:17.044272 61 log.go:172] (0xc000afa9a0) Data frame received for 5\nI0519 23:39:17.044300 61 log.go:172] (0xc0003de460) (5) Data frame handling\nI0519 23:39:17.044427 61 log.go:172] (0xc000afa9a0) Data frame received for 3\nI0519 23:39:17.044447 61 log.go:172] (0xc0001f4000) (3) Data frame handling\nI0519 23:39:17.046344 61 log.go:172] (0xc000afa9a0) Data frame received for 1\nI0519 23:39:17.046364 61 log.go:172] (0xc000524820) (1) Data frame handling\nI0519 23:39:17.046375 61 log.go:172] (0xc000524820) (1) Data frame sent\nI0519 23:39:17.046479 61 log.go:172] (0xc000afa9a0) (0xc000524820) Stream removed, broadcasting: 1\nI0519 23:39:17.046547 61 log.go:172] (0xc000afa9a0) Go away received\nI0519 23:39:17.046753 61 log.go:172] (0xc000afa9a0) (0xc000524820) Stream removed, broadcasting: 1\nI0519 23:39:17.046765 61 log.go:172] (0xc000afa9a0) (0xc0001f4000) Stream removed, broadcasting: 3\nI0519 23:39:17.046771 61 log.go:172] (0xc000afa9a0) (0xc0003de460) Stream removed, broadcasting: 5\n" May 19 23:39:17.053: INFO: stdout: "" May 19 23:39:17.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2251 execpod-affinity4dvwb -- /bin/sh -x -c nc -zv -t -w 2 10.107.172.1 80' May 19 23:39:17.263: INFO: stderr: "I0519 23:39:17.187777 81 log.go:172] (0xc00003be40) (0xc0003863c0) Create stream\nI0519 23:39:17.187891 81 log.go:172] (0xc00003be40) (0xc0003863c0) Stream added, broadcasting: 1\nI0519 23:39:17.194762 81 log.go:172] (0xc00003be40) 
Reply frame received for 1\nI0519 23:39:17.194814 81 log.go:172] (0xc00003be40) (0xc0000dde00) Create stream\nI0519 23:39:17.194831 81 log.go:172] (0xc00003be40) (0xc0000dde00) Stream added, broadcasting: 3\nI0519 23:39:17.196192 81 log.go:172] (0xc00003be40) Reply frame received for 3\nI0519 23:39:17.196225 81 log.go:172] (0xc00003be40) (0xc00023e320) Create stream\nI0519 23:39:17.196235 81 log.go:172] (0xc00003be40) (0xc00023e320) Stream added, broadcasting: 5\nI0519 23:39:17.196891 81 log.go:172] (0xc00003be40) Reply frame received for 5\nI0519 23:39:17.258487 81 log.go:172] (0xc00003be40) Data frame received for 3\nI0519 23:39:17.258519 81 log.go:172] (0xc0000dde00) (3) Data frame handling\nI0519 23:39:17.258537 81 log.go:172] (0xc00003be40) Data frame received for 5\nI0519 23:39:17.258545 81 log.go:172] (0xc00023e320) (5) Data frame handling\nI0519 23:39:17.258555 81 log.go:172] (0xc00023e320) (5) Data frame sent\nI0519 23:39:17.258566 81 log.go:172] (0xc00003be40) Data frame received for 5\nI0519 23:39:17.258576 81 log.go:172] (0xc00023e320) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.172.1 80\nConnection to 10.107.172.1 80 port [tcp/http] succeeded!\nI0519 23:39:17.259699 81 log.go:172] (0xc00003be40) Data frame received for 1\nI0519 23:39:17.259720 81 log.go:172] (0xc0003863c0) (1) Data frame handling\nI0519 23:39:17.259741 81 log.go:172] (0xc0003863c0) (1) Data frame sent\nI0519 23:39:17.259759 81 log.go:172] (0xc00003be40) (0xc0003863c0) Stream removed, broadcasting: 1\nI0519 23:39:17.259779 81 log.go:172] (0xc00003be40) Go away received\nI0519 23:39:17.260077 81 log.go:172] (0xc00003be40) (0xc0003863c0) Stream removed, broadcasting: 1\nI0519 23:39:17.260091 81 log.go:172] (0xc00003be40) (0xc0000dde00) Stream removed, broadcasting: 3\nI0519 23:39:17.260101 81 log.go:172] (0xc00003be40) (0xc00023e320) Stream removed, broadcasting: 5\n" May 19 23:39:17.263: INFO: stdout: "" May 19 23:39:17.263: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2251 execpod-affinity4dvwb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.172.1:80/ ; done' May 19 23:39:17.577: INFO: stderr: "I0519 23:39:17.393323 102 log.go:172] (0xc00095b130) (0xc000ac4140) Create stream\nI0519 23:39:17.393365 102 log.go:172] (0xc00095b130) (0xc000ac4140) Stream added, broadcasting: 1\nI0519 23:39:17.397291 102 log.go:172] (0xc00095b130) Reply frame received for 1\nI0519 23:39:17.397331 102 log.go:172] (0xc00095b130) (0xc000698460) Create stream\nI0519 23:39:17.397341 102 log.go:172] (0xc00095b130) (0xc000698460) Stream added, broadcasting: 3\nI0519 23:39:17.398047 102 log.go:172] (0xc00095b130) Reply frame received for 3\nI0519 23:39:17.398072 102 log.go:172] (0xc00095b130) (0xc00065fa40) Create stream\nI0519 23:39:17.398080 102 log.go:172] (0xc00095b130) (0xc00065fa40) Stream added, broadcasting: 5\nI0519 23:39:17.398743 102 log.go:172] (0xc00095b130) Reply frame received for 5\nI0519 23:39:17.498263 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.498289 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.498302 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.498325 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.498344 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.498360 102 log.go:172] (0xc00065fa40) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:17.510079 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.510107 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.510119 102 log.go:172] (0xc00065fa40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:17.510134 102 
log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.510146 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.510154 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.513007 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.513024 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.513036 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.513482 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.513506 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.513515 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.513529 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.513534 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.513540 102 log.go:172] (0xc00065fa40) (5) Data frame sent\nI0519 23:39:17.513547 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.513552 102 log.go:172] (0xc00065fa40) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:17.513567 102 log.go:172] (0xc00065fa40) (5) Data frame sent\nI0519 23:39:17.516285 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.516308 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.516325 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.516587 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.516603 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.516611 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.516624 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.516630 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.516636 102 log.go:172] (0xc00065fa40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:17.519982 102 log.go:172] 
(0xc00095b130) Data frame received for 3\nI0519 23:39:17.520000 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.520014 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.520244 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.520254 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.520264 102 log.go:172] (0xc00065fa40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:17.520286 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.520299 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.520305 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.525451 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.525460 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.525465 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.526037 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.526063 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.526077 102 log.go:172] (0xc00065fa40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:17.526100 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.526113 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.526122 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.528641 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.528654 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.528665 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.529407 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.529422 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.529432 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.529445 102 log.go:172] 
(0xc00095b130) Data frame received for 5\nI0519 23:39:17.529453 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.529464 102 log.go:172] (0xc00065fa40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:17.532744 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.532754 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.532762 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.533730 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.533762 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.533778 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.533796 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.533809 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.533824 102 log.go:172] (0xc00065fa40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:17.537583 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.537600 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.537627 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.538104 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.538145 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.538165 102 log.go:172] (0xc00065fa40) (5) Data frame sent\nI0519 23:39:17.538182 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.538206 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.538223 102 log.go:172] (0xc00095b130) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:17.538233 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.538271 102 log.go:172] (0xc00065fa40) (5) Data frame sent\nI0519 23:39:17.538287 102 log.go:172] (0xc000698460) (3) 
Data frame sent\nI0519 23:39:17.541779 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.541795 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.541813 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.542224 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.542243 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.542259 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.542278 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.542298 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.542315 102 log.go:172] (0xc00065fa40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:17.545675 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.545696 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.545707 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.545922 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.545944 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.545968 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.545978 102 log.go:172] (0xc00065fa40) (5) Data frame sent\nI0519 23:39:17.545985 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.545992 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.546067 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.546088 102 log.go:172] (0xc000698460) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:17.546121 102 log.go:172] (0xc00065fa40) (5) Data frame sent\nI0519 23:39:17.551065 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.551078 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.551086 102 log.go:172] (0xc000698460) (3) Data frame 
sent\nI0519 23:39:17.551494 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.551522 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.551531 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.551546 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.551556 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.551566 102 log.go:172] (0xc00065fa40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:17.556462 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.556477 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.556494 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.556749 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.556769 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.556778 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.556804 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.556837 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.556853 102 log.go:172] (0xc00065fa40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0519 23:39:17.556868 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.556876 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.556892 102 log.go:172] (0xc00065fa40) (5) Data frame sent\n http://10.107.172.1:80/\nI0519 23:39:17.560289 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.560305 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.560318 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.560686 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.560699 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.560708 102 log.go:172] (0xc00065fa40) (5) Data frame sent\nI0519 
23:39:17.560717 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.560725 102 log.go:172] (0xc00065fa40) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:17.560739 102 log.go:172] (0xc00065fa40) (5) Data frame sent\nI0519 23:39:17.560771 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.560802 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.560846 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.566677 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.566704 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.566729 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.567315 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.567328 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.567336 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.567360 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.567388 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.567413 102 log.go:172] (0xc00065fa40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:17.570654 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.570667 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.570680 102 log.go:172] (0xc000698460) (3) Data frame sent\nI0519 23:39:17.571133 102 log.go:172] (0xc00095b130) Data frame received for 5\nI0519 23:39:17.571162 102 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0519 23:39:17.571200 102 log.go:172] (0xc00095b130) Data frame received for 3\nI0519 23:39:17.571213 102 log.go:172] (0xc000698460) (3) Data frame handling\nI0519 23:39:17.572928 102 log.go:172] (0xc00095b130) Data frame received for 1\nI0519 23:39:17.572944 102 log.go:172] (0xc000ac4140) (1) Data frame handling\nI0519 
23:39:17.572956 102 log.go:172] (0xc000ac4140) (1) Data frame sent\nI0519 23:39:17.573088 102 log.go:172] (0xc00095b130) (0xc000ac4140) Stream removed, broadcasting: 1\nI0519 23:39:17.573297 102 log.go:172] (0xc00095b130) Go away received\nI0519 23:39:17.573596 102 log.go:172] (0xc00095b130) (0xc000ac4140) Stream removed, broadcasting: 1\nI0519 23:39:17.573612 102 log.go:172] (0xc00095b130) (0xc000698460) Stream removed, broadcasting: 3\nI0519 23:39:17.573620 102 log.go:172] (0xc00095b130) (0xc00065fa40) Stream removed, broadcasting: 5\n" May 19 23:39:17.578: INFO: stdout: "\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h\naffinity-clusterip-timeout-n9r5h" May 19 23:39:17.578: INFO: Received response from host: May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 
23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Received response from host: affinity-clusterip-timeout-n9r5h May 19 23:39:17.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2251 execpod-affinity4dvwb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.107.172.1:80/' May 19 23:39:17.800: INFO: stderr: "I0519 23:39:17.713419 117 log.go:172] (0xc00003a790) (0xc0003af0e0) Create stream\nI0519 23:39:17.713506 117 log.go:172] (0xc00003a790) (0xc0003af0e0) Stream added, broadcasting: 1\nI0519 23:39:17.716240 117 log.go:172] (0xc00003a790) Reply frame received for 1\nI0519 23:39:17.716284 117 log.go:172] (0xc00003a790) (0xc00026a0a0) Create stream\nI0519 23:39:17.716294 117 log.go:172] (0xc00003a790) (0xc00026a0a0) Stream added, broadcasting: 3\nI0519 23:39:17.717441 117 log.go:172] (0xc00003a790) Reply frame received for 3\nI0519 23:39:17.717463 117 log.go:172] (0xc00003a790) (0xc0003e7e00) Create stream\nI0519 23:39:17.717471 117 log.go:172] (0xc00003a790) (0xc0003e7e00) Stream added, broadcasting: 5\nI0519 23:39:17.718480 117 log.go:172] (0xc00003a790) Reply frame received for 5\nI0519 23:39:17.787757 117 log.go:172] (0xc00003a790) Data frame received for 5\nI0519 23:39:17.787789 117 log.go:172] (0xc0003e7e00) (5) Data frame handling\nI0519 23:39:17.787804 117 log.go:172] (0xc0003e7e00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 
23:39:17.792413 117 log.go:172] (0xc00003a790) Data frame received for 3\nI0519 23:39:17.792440 117 log.go:172] (0xc00026a0a0) (3) Data frame handling\nI0519 23:39:17.792476 117 log.go:172] (0xc00026a0a0) (3) Data frame sent\nI0519 23:39:17.793038 117 log.go:172] (0xc00003a790) Data frame received for 5\nI0519 23:39:17.793065 117 log.go:172] (0xc0003e7e00) (5) Data frame handling\nI0519 23:39:17.793355 117 log.go:172] (0xc00003a790) Data frame received for 3\nI0519 23:39:17.793377 117 log.go:172] (0xc00026a0a0) (3) Data frame handling\nI0519 23:39:17.794718 117 log.go:172] (0xc00003a790) Data frame received for 1\nI0519 23:39:17.794761 117 log.go:172] (0xc0003af0e0) (1) Data frame handling\nI0519 23:39:17.794784 117 log.go:172] (0xc0003af0e0) (1) Data frame sent\nI0519 23:39:17.794804 117 log.go:172] (0xc00003a790) (0xc0003af0e0) Stream removed, broadcasting: 1\nI0519 23:39:17.794834 117 log.go:172] (0xc00003a790) Go away received\nI0519 23:39:17.795309 117 log.go:172] (0xc00003a790) (0xc0003af0e0) Stream removed, broadcasting: 1\nI0519 23:39:17.795336 117 log.go:172] (0xc00003a790) (0xc00026a0a0) Stream removed, broadcasting: 3\nI0519 23:39:17.795347 117 log.go:172] (0xc00003a790) (0xc0003e7e00) Stream removed, broadcasting: 5\n" May 19 23:39:17.800: INFO: stdout: "affinity-clusterip-timeout-n9r5h" May 19 23:39:32.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2251 execpod-affinity4dvwb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.107.172.1:80/' May 19 23:39:33.051: INFO: stderr: "I0519 23:39:32.945226 139 log.go:172] (0xc0000e0370) (0xc000444320) Create stream\nI0519 23:39:32.945286 139 log.go:172] (0xc0000e0370) (0xc000444320) Stream added, broadcasting: 1\nI0519 23:39:32.947586 139 log.go:172] (0xc0000e0370) Reply frame received for 1\nI0519 23:39:32.947640 139 log.go:172] (0xc0000e0370) (0xc0004452c0) Create stream\nI0519 23:39:32.947656 139 log.go:172] 
(0xc0000e0370) (0xc0004452c0) Stream added, broadcasting: 3\nI0519 23:39:32.948767 139 log.go:172] (0xc0000e0370) Reply frame received for 3\nI0519 23:39:32.948807 139 log.go:172] (0xc0000e0370) (0xc0003bae60) Create stream\nI0519 23:39:32.948816 139 log.go:172] (0xc0000e0370) (0xc0003bae60) Stream added, broadcasting: 5\nI0519 23:39:32.950403 139 log.go:172] (0xc0000e0370) Reply frame received for 5\nI0519 23:39:33.038384 139 log.go:172] (0xc0000e0370) Data frame received for 5\nI0519 23:39:33.038415 139 log.go:172] (0xc0003bae60) (5) Data frame handling\nI0519 23:39:33.038434 139 log.go:172] (0xc0003bae60) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.107.172.1:80/\nI0519 23:39:33.043157 139 log.go:172] (0xc0000e0370) Data frame received for 3\nI0519 23:39:33.043184 139 log.go:172] (0xc0004452c0) (3) Data frame handling\nI0519 23:39:33.043203 139 log.go:172] (0xc0004452c0) (3) Data frame sent\nI0519 23:39:33.043830 139 log.go:172] (0xc0000e0370) Data frame received for 3\nI0519 23:39:33.043854 139 log.go:172] (0xc0004452c0) (3) Data frame handling\nI0519 23:39:33.043902 139 log.go:172] (0xc0000e0370) Data frame received for 5\nI0519 23:39:33.043921 139 log.go:172] (0xc0003bae60) (5) Data frame handling\nI0519 23:39:33.045907 139 log.go:172] (0xc0000e0370) Data frame received for 1\nI0519 23:39:33.045933 139 log.go:172] (0xc000444320) (1) Data frame handling\nI0519 23:39:33.045969 139 log.go:172] (0xc000444320) (1) Data frame sent\nI0519 23:39:33.045993 139 log.go:172] (0xc0000e0370) (0xc000444320) Stream removed, broadcasting: 1\nI0519 23:39:33.046020 139 log.go:172] (0xc0000e0370) Go away received\nI0519 23:39:33.046489 139 log.go:172] (0xc0000e0370) (0xc000444320) Stream removed, broadcasting: 1\nI0519 23:39:33.046532 139 log.go:172] (0xc0000e0370) (0xc0004452c0) Stream removed, broadcasting: 3\nI0519 23:39:33.046546 139 log.go:172] (0xc0000e0370) (0xc0003bae60) Stream removed, broadcasting: 5\n" May 19 23:39:33.051: INFO: stdout: 
"affinity-clusterip-timeout-ml4l9" May 19 23:39:33.051: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-2251, will wait for the garbage collector to delete the pods May 19 23:39:33.574: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 358.013874ms May 19 23:39:33.874: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 300.20456ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:39:45.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2251" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:60.723 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":3,"skipped":38,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:39:45.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-29lm STEP: Creating a pod to test atomic-volume-subpath May 19 23:39:45.488: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-29lm" in namespace "subpath-3602" to be "Succeeded or Failed" May 19 23:39:45.508: INFO: Pod "pod-subpath-test-downwardapi-29lm": Phase="Pending", Reason="", readiness=false. Elapsed: 19.687416ms May 19 23:39:47.512: INFO: Pod "pod-subpath-test-downwardapi-29lm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023548628s May 19 23:39:49.516: INFO: Pod "pod-subpath-test-downwardapi-29lm": Phase="Running", Reason="", readiness=true. Elapsed: 4.027717636s May 19 23:39:51.520: INFO: Pod "pod-subpath-test-downwardapi-29lm": Phase="Running", Reason="", readiness=true. Elapsed: 6.031461725s May 19 23:39:53.523: INFO: Pod "pod-subpath-test-downwardapi-29lm": Phase="Running", Reason="", readiness=true. Elapsed: 8.035073668s May 19 23:39:55.527: INFO: Pod "pod-subpath-test-downwardapi-29lm": Phase="Running", Reason="", readiness=true. Elapsed: 10.038947173s May 19 23:39:57.531: INFO: Pod "pod-subpath-test-downwardapi-29lm": Phase="Running", Reason="", readiness=true. Elapsed: 12.043004732s May 19 23:39:59.536: INFO: Pod "pod-subpath-test-downwardapi-29lm": Phase="Running", Reason="", readiness=true. Elapsed: 14.047519338s May 19 23:40:01.540: INFO: Pod "pod-subpath-test-downwardapi-29lm": Phase="Running", Reason="", readiness=true. Elapsed: 16.052271723s May 19 23:40:03.559: INFO: Pod "pod-subpath-test-downwardapi-29lm": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.070497584s May 19 23:40:05.563: INFO: Pod "pod-subpath-test-downwardapi-29lm": Phase="Running", Reason="", readiness=true. Elapsed: 20.074481315s May 19 23:40:07.567: INFO: Pod "pod-subpath-test-downwardapi-29lm": Phase="Running", Reason="", readiness=true. Elapsed: 22.078690883s May 19 23:40:09.572: INFO: Pod "pod-subpath-test-downwardapi-29lm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.083520712s STEP: Saw pod success May 19 23:40:09.572: INFO: Pod "pod-subpath-test-downwardapi-29lm" satisfied condition "Succeeded or Failed" May 19 23:40:09.575: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-29lm container test-container-subpath-downwardapi-29lm: STEP: delete the pod May 19 23:40:09.630: INFO: Waiting for pod pod-subpath-test-downwardapi-29lm to disappear May 19 23:40:09.677: INFO: Pod pod-subpath-test-downwardapi-29lm no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-29lm May 19 23:40:09.677: INFO: Deleting pod "pod-subpath-test-downwardapi-29lm" in namespace "subpath-3602" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:40:09.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3602" for this suite. 
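For reference, the atomic-writer subpath pod exercised above can be sketched as a minimal manifest. This is an illustrative reconstruction, not the framework's generated spec: the pod name, image, command, and volume paths are assumptions; the essential mechanics are a `downwardAPI` volume mounted through `subPath`.

```yaml
# Illustrative sketch of a downward-API subpath pod (names/image are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    # read the file exposed through the subPath mount, then exit successfully
    command: ["sh", "-c", "cat /probe/podname"]
    volumeMounts:
    - name: downward
      mountPath: /probe
      subPath: podname-dir      # mount only a subdirectory of the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname-dir/podname
        fieldRef:
          fieldPath: metadata.name   # pod name written by the kubelet's atomic writer
```

The conformance test additionally polls the container's output while the atomic writer updates the volume, which is why the pod stays in `Running` for ~20s before reaching `Succeeded`.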
• [SLOW TEST:24.312 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":4,"skipped":86,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:40:09.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 23:40:09.744: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 19 23:40:12.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6929 create -f -' May 19 23:40:16.055: INFO: stderr: "" May 19 23:40:16.055: INFO: stdout: 
"e2e-test-crd-publish-openapi-341-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 19 23:40:16.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6929 delete e2e-test-crd-publish-openapi-341-crds test-cr' May 19 23:40:16.164: INFO: stderr: "" May 19 23:40:16.164: INFO: stdout: "e2e-test-crd-publish-openapi-341-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 19 23:40:16.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6929 apply -f -' May 19 23:40:16.425: INFO: stderr: "" May 19 23:40:16.425: INFO: stdout: "e2e-test-crd-publish-openapi-341-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 19 23:40:16.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6929 delete e2e-test-crd-publish-openapi-341-crds test-cr' May 19 23:40:16.539: INFO: stderr: "" May 19 23:40:16.539: INFO: stdout: "e2e-test-crd-publish-openapi-341-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 19 23:40:16.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-341-crds' May 19 23:40:16.816: INFO: stderr: "" May 19 23:40:16.816: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-341-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:40:19.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6929" for this suite. • [SLOW TEST:10.115 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":5,"skipped":87,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:40:19.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 23:40:20.392: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 23:40:22.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528420, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528420, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528420, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528420, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 23:40:24.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528420, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528420, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528420, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528420, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 23:40:27.428: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:40:27.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8125" for this suite. STEP: Destroying namespace "webhook-8125-markers" for this suite. 
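The webhook registration performed in the "Registering the mutating pod webhook" step corresponds roughly to a `MutatingWebhookConfiguration` like the following. This is a hedged sketch: the webhook name, `path`, and `caBundle` placeholder are assumptions; the service namespace/name mirror the `webhook-8125` / `e2e-test-webhook` objects visible in the log.

```yaml
# Illustrative sketch only; the e2e framework registers this via the API,
# and the exact webhook name and serving path are assumptions.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-mutation-webhook
webhooks:
- name: pod-mutation.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: webhook-8125       # namespace created by this test
      name: e2e-test-webhook        # service paired with the endpoint above
      path: /mutating-pods          # assumed serving path
    caBundle: <base64-encoded CA>   # placeholder; the test provisions its own cert
```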
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.861 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":6,"skipped":92,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:40:27.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 23:40:28.715: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 23:40:30.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528428, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528428, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528428, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528428, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 23:40:32.762: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528428, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528428, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528428, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528428, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 23:40:35.759: INFO: Waiting for amount of service:e2e-test-webhook 
endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 19 23:40:39.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-626 to-be-attached-pod -i -c=container1' May 19 23:40:39.972: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:40:39.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-626" for this suite. STEP: Destroying namespace "webhook-626-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.453 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":7,"skipped":107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:40:40.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 19 23:40:40.231: INFO: Waiting up to 5m0s for pod "pod-16e93862-4de4-49f6-bfe4-cd5a33b1c0cd" in namespace "emptydir-1964" to be "Succeeded or Failed" May 19 23:40:40.245: INFO: Pod "pod-16e93862-4de4-49f6-bfe4-cd5a33b1c0cd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.039356ms May 19 23:40:42.319: INFO: Pod "pod-16e93862-4de4-49f6-bfe4-cd5a33b1c0cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088198834s May 19 23:40:44.323: INFO: Pod "pod-16e93862-4de4-49f6-bfe4-cd5a33b1c0cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092117713s STEP: Saw pod success May 19 23:40:44.323: INFO: Pod "pod-16e93862-4de4-49f6-bfe4-cd5a33b1c0cd" satisfied condition "Succeeded or Failed" May 19 23:40:44.326: INFO: Trying to get logs from node latest-worker pod pod-16e93862-4de4-49f6-bfe4-cd5a33b1c0cd container test-container: STEP: delete the pod May 19 23:40:44.366: INFO: Waiting for pod pod-16e93862-4de4-49f6-bfe4-cd5a33b1c0cd to disappear May 19 23:40:44.376: INFO: Pod pod-16e93862-4de4-49f6-bfe4-cd5a33b1c0cd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:40:44.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1964" for this suite. 
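The emptyDir default-medium test above boils down to mounting an `emptyDir: {}` volume and asserting its file mode. A minimal sketch, with the caveat that the real test uses the agnhost/mounttest image rather than busybox, and the expected mode reflects the default emptyDir permissions on Linux:

```yaml
# Illustrative sketch (image and command are assumptions; the e2e test
# uses its own mounttest container to report the volume's mode).
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # print the mount point's permission bits; default-medium emptyDir
    # is expected to be world-writable (0777) per the conformance check
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}    # default medium (node disk), no sizeLimit
```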
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":8,"skipped":140,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:40:44.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5605.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5605.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5605.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5605.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5605.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5605.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 19 23:40:50.702: INFO: DNS probes using dns-5605/dns-test-28090bfe-fc25-48d5-ba8f-82b20e67e523 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:40:50.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5605" for this suite.
• [SLOW TEST:6.481 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":9,"skipped":151,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:40:50.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 19 23:40:51.474: INFO: Pod name rollover-pod: Found 0 pods out of 1
May 19 23:40:56.479: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 19 23:40:56.479: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May 19 23:40:58.484: INFO: Creating deployment "test-rollover-deployment"
May 19 23:40:58.546: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May 19 23:41:00.554: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May 19 23:41:00.560: INFO: Ensure that both replica sets have 1 created replica
May 19
23:41:00.565: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 19 23:41:00.573: INFO: Updating deployment test-rollover-deployment May 19 23:41:00.573: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 19 23:41:02.629: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 19 23:41:02.635: INFO: Make sure deployment "test-rollover-deployment" is complete May 19 23:41:02.642: INFO: all replica sets need to contain the pod-template-hash label May 19 23:41:02.642: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528460, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 23:41:04.651: INFO: all replica sets need to contain the pod-template-hash label May 19 23:41:04.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528460, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 23:41:06.650: INFO: all replica sets need to contain the pod-template-hash label May 19 23:41:06.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528465, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 23:41:08.650: INFO: all replica sets need to contain the pod-template-hash label May 19 23:41:08.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528465, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 23:41:10.654: INFO: all replica sets need to contain the pod-template-hash label May 19 23:41:10.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528465, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 23:41:12.651: INFO: all replica sets need to contain the pod-template-hash label May 19 23:41:12.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528465, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 23:41:14.650: INFO: all replica sets need to contain the pod-template-hash label May 19 23:41:14.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528465, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528458, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 23:41:16.664: INFO: May 19 23:41:16.664: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 19 23:41:16.670: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4104 /apis/apps/v1/namespaces/deployment-4104/deployments/test-rollover-deployment fe5cd543-3712-4cb1-a755-9a10a9800142 6073429 2 2020-05-19 23:40:58 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-19 23:41:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-19 23:41:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 
00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002bac568 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-19 23:40:58 +0000 UTC,LastTransitionTime:2020-05-19 23:40:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-19 23:41:15 +0000 UTC,LastTransitionTime:2020-05-19 23:40:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 19 23:41:16.673: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-4104 /apis/apps/v1/namespaces/deployment-4104/replicasets/test-rollover-deployment-7c4fd9c879 612b4395-7ca1-4f4e-a35f-0021f32c472a 6073418 2 2020-05-19 23:41:00 +0000 UTC 
map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment fe5cd543-3712-4cb1-a755-9a10a9800142 0xc001778777 0xc001778778}] [] [{kube-controller-manager Update apps/v1 2020-05-19 23:41:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe5cd543-3712-4cb1-a755-9a10a9800142\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001778848 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 19 23:41:16.673: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 19 23:41:16.673: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4104 /apis/apps/v1/namespaces/deployment-4104/replicasets/test-rollover-controller 9831b17f-17b0-4ce4-a7aa-ffd21b6060d8 6073428 2 2020-05-19 23:40:51 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment fe5cd543-3712-4cb1-a755-9a10a9800142 0xc00177847f 0xc001778490}] [] [{e2e.test Update apps/v1 2020-05-19 23:40:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-19 23:41:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe5cd543-3712-4cb1-a755-9a10a9800142\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001778588 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 23:41:16.673: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-4104 /apis/apps/v1/namespaces/deployment-4104/replicasets/test-rollover-deployment-5686c4cfd5 0eb416a2-8fa5-4e94-b4f5-f3807bd329c4 6073367 2 2020-05-19 23:40:58 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment fe5cd543-3712-4cb1-a755-9a10a9800142 0xc001778617 0xc001778618}] [] [{kube-controller-manager Update apps/v1 2020-05-19 23:41:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe5cd543-3712-4cb1-a755-9a10a9800142\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0017786f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] 
map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 23:41:16.675: INFO: Pod "test-rollover-deployment-7c4fd9c879-xzr9p" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-xzr9p test-rollover-deployment-7c4fd9c879- deployment-4104 /api/v1/namespaces/deployment-4104/pods/test-rollover-deployment-7c4fd9c879-xzr9p ec0e26f6-0b63-4e3f-bc05-f6d7ea4498eb 6073386 0 2020-05-19 23:41:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 612b4395-7ca1-4f4e-a35f-0021f32c472a 0xc0017794c7 0xc0017794c8}] [] [{kube-controller-manager Update v1 2020-05-19 23:41:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"612b4395-7ca1-4f4e-a35f-0021f32c472a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 23:41:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.58\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tnjhm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tnjhm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tnjhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDe
vices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 23:41:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 23:41:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 23:41:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 23:41:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.58,StartTime:2020-05-19 
23:41:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 23:41:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://495d05914129fa81b7ff90d51758c7c889f3d54eb380af13c890abb07ed4d2e6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:41:16.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4104" for this suite. 
• [SLOW TEST:25.817 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":10,"skipped":155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:41:16.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 19 23:41:24.837: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 23:41:24.870: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 23:41:26.870: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 23:41:26.874: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 23:41:28.870: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 23:41:28.874: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 23:41:30.870: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 23:41:30.874: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 23:41:32.870: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 23:41:32.906: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 23:41:34.870: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 23:41:34.872: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 23:41:36.870: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 23:41:36.882: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:41:36.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4464" for this suite.
• [SLOW TEST:20.215 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":11,"skipped":190,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:41:36.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-dd0dd868-4f40-4455-9a08-38c650d122cd
STEP: Creating a pod to test consume configMaps
May 19 23:41:36.965: INFO: Waiting up to 5m0s for pod "pod-configmaps-80b50e77-4916-42f3-9ad4-74273fe27863" in namespace "configmap-5429" to be "Succeeded or Failed"
May 19 23:41:36.970: INFO: Pod "pod-configmaps-80b50e77-4916-42f3-9ad4-74273fe27863": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.119882ms
May 19 23:41:39.062: INFO: Pod "pod-configmaps-80b50e77-4916-42f3-9ad4-74273fe27863": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096320098s
May 19 23:41:41.066: INFO: Pod "pod-configmaps-80b50e77-4916-42f3-9ad4-74273fe27863": Phase="Running", Reason="", readiness=true. Elapsed: 4.100666534s
May 19 23:41:43.070: INFO: Pod "pod-configmaps-80b50e77-4916-42f3-9ad4-74273fe27863": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104831113s
STEP: Saw pod success
May 19 23:41:43.070: INFO: Pod "pod-configmaps-80b50e77-4916-42f3-9ad4-74273fe27863" satisfied condition "Succeeded or Failed"
May 19 23:41:43.074: INFO: Trying to get logs from node latest-worker pod pod-configmaps-80b50e77-4916-42f3-9ad4-74273fe27863 container configmap-volume-test:
STEP: delete the pod
May 19 23:41:43.249: INFO: Waiting for pod pod-configmaps-80b50e77-4916-42f3-9ad4-74273fe27863 to disappear
May 19 23:41:43.264: INFO: Pod pod-configmaps-80b50e77-4916-42f3-9ad4-74273fe27863 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:41:43.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5429" for this suite.
• [SLOW TEST:6.386 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":12,"skipped":195,"failed":0}
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:41:43.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating secret secrets-1307/secret-test-c63f60e6-4c45-483c-bc4b-1da3fed5b1e7
STEP: Creating a pod to test consume secrets
May 19 23:41:43.387: INFO: Waiting up to 5m0s for pod "pod-configmaps-26cf4bd6-8e6f-4e21-b3b8-5f9dbf618f50" in namespace "secrets-1307" to be "Succeeded or Failed"
May 19 23:41:43.405: INFO: Pod "pod-configmaps-26cf4bd6-8e6f-4e21-b3b8-5f9dbf618f50": Phase="Pending", Reason="", readiness=false. Elapsed: 18.634797ms
May 19 23:41:45.511: INFO: Pod "pod-configmaps-26cf4bd6-8e6f-4e21-b3b8-5f9dbf618f50": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.124817387s
May 19 23:41:47.514: INFO: Pod "pod-configmaps-26cf4bd6-8e6f-4e21-b3b8-5f9dbf618f50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127649794s
STEP: Saw pod success
May 19 23:41:47.514: INFO: Pod "pod-configmaps-26cf4bd6-8e6f-4e21-b3b8-5f9dbf618f50" satisfied condition "Succeeded or Failed"
May 19 23:41:47.517: INFO: Trying to get logs from node latest-worker pod pod-configmaps-26cf4bd6-8e6f-4e21-b3b8-5f9dbf618f50 container env-test:
STEP: delete the pod
May 19 23:41:47.581: INFO: Waiting for pod pod-configmaps-26cf4bd6-8e6f-4e21-b3b8-5f9dbf618f50 to disappear
May 19 23:41:47.594: INFO: Pod pod-configmaps-26cf4bd6-8e6f-4e21-b3b8-5f9dbf618f50 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:41:47.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1307" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":13,"skipped":195,"failed":0}
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:41:47.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts
dns-querier-1.dns-test-service.dns-428.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-428.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-428.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-428.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-428.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-428.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 19 23:41:54.060: INFO: DNS probes using dns-428/dns-test-c2ebe11e-c1b7-4c71-9d1d-101802a2c069 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:41:54.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-428" for this suite.
• [SLOW TEST:6.551 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":14,"skipped":195,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:41:54.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May 19 23:41:54.224: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3975 /api/v1/namespaces/watch-3975/configmaps/e2e-watch-test-configmap-a aa2b0c12-57fe-4da5-9efd-77d5a835032d 6073696 0 2020-05-19 23:41:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 23:41:54 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 19 23:41:54.224: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3975 /api/v1/namespaces/watch-3975/configmaps/e2e-watch-test-configmap-a aa2b0c12-57fe-4da5-9efd-77d5a835032d 6073696 0 2020-05-19 23:41:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 23:41:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May 19 23:42:04.232: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3975 /api/v1/namespaces/watch-3975/configmaps/e2e-watch-test-configmap-a aa2b0c12-57fe-4da5-9efd-77d5a835032d 6073743 0 2020-05-19 23:41:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 23:42:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
May 19 23:42:04.232: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3975 /api/v1/namespaces/watch-3975/configmaps/e2e-watch-test-configmap-a aa2b0c12-57fe-4da5-9efd-77d5a835032d 6073743 0 2020-05-19 23:41:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 23:42:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
May 19 23:42:14.240: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3975 /api/v1/namespaces/watch-3975/configmaps/e2e-watch-test-configmap-a aa2b0c12-57fe-4da5-9efd-77d5a835032d 6073775 0 2020-05-19 23:41:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 23:42:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 19 23:42:14.241: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3975 /api/v1/namespaces/watch-3975/configmaps/e2e-watch-test-configmap-a aa2b0c12-57fe-4da5-9efd-77d5a835032d 6073775 0 2020-05-19 23:41:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 23:42:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
May 19 23:42:24.248: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3975 /api/v1/namespaces/watch-3975/configmaps/e2e-watch-test-configmap-a aa2b0c12-57fe-4da5-9efd-77d5a835032d 6073805 0 2020-05-19 23:41:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 23:42:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 19 23:42:24.249: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3975 /api/v1/namespaces/watch-3975/configmaps/e2e-watch-test-configmap-a aa2b0c12-57fe-4da5-9efd-77d5a835032d 6073805 0 2020-05-19 23:41:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[]
[] [] [{e2e.test Update v1 2020-05-19 23:42:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
May 19 23:42:34.257: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3975 /api/v1/namespaces/watch-3975/configmaps/e2e-watch-test-configmap-b c7737725-f16d-4a31-a6e6-850f4101efe6 6073835 0 2020-05-19 23:42:34 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-19 23:42:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 19 23:42:34.257: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3975 /api/v1/namespaces/watch-3975/configmaps/e2e-watch-test-configmap-b c7737725-f16d-4a31-a6e6-850f4101efe6 6073835 0 2020-05-19 23:42:34 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-19 23:42:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
May 19 23:42:44.266: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3975 /api/v1/namespaces/watch-3975/configmaps/e2e-watch-test-configmap-b c7737725-f16d-4a31-a6e6-850f4101efe6 6073864 0 2020-05-19 23:42:34 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-19 23:42:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 19 23:42:44.266: INFO: Got : DELETED
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3975 /api/v1/namespaces/watch-3975/configmaps/e2e-watch-test-configmap-b c7737725-f16d-4a31-a6e6-850f4101efe6 6073864 0 2020-05-19 23:42:34 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-19 23:42:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:42:54.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3975" for this suite.
• [SLOW TEST:60.125 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":15,"skipped":209,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:42:54.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:42:54.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6431" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":16,"skipped":243,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:42:54.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 19 23:42:54.505: INFO: Waiting up to 5m0s for pod "downward-api-af37be1a-e15d-4446-a009-c9b448fd7123" in namespace "downward-api-1256" to be "Succeeded or Failed"
May 19 23:42:54.518: INFO: Pod "downward-api-af37be1a-e15d-4446-a009-c9b448fd7123": Phase="Pending", Reason="",
readiness=false. Elapsed: 12.902721ms
May 19 23:42:56.589: INFO: Pod "downward-api-af37be1a-e15d-4446-a009-c9b448fd7123": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084341188s
May 19 23:42:58.594: INFO: Pod "downward-api-af37be1a-e15d-4446-a009-c9b448fd7123": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089321602s
STEP: Saw pod success
May 19 23:42:58.594: INFO: Pod "downward-api-af37be1a-e15d-4446-a009-c9b448fd7123" satisfied condition "Succeeded or Failed"
May 19 23:42:58.598: INFO: Trying to get logs from node latest-worker2 pod downward-api-af37be1a-e15d-4446-a009-c9b448fd7123 container dapi-container:
STEP: delete the pod
May 19 23:42:58.677: INFO: Waiting for pod downward-api-af37be1a-e15d-4446-a009-c9b448fd7123 to disappear
May 19 23:42:58.692: INFO: Pod downward-api-af37be1a-e15d-4446-a009-c9b448fd7123 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:42:58.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1256" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":17,"skipped":254,"failed":0}
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:42:58.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May 19 23:43:03.332: INFO: Successfully updated pod "labelsupdate7732a90f-e9f6-4472-890a-db0b5f232407"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:43:07.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1912" for this suite.
• [SLOW TEST:8.660 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":18,"skipped":254,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:43:07.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test hostPath mode
May 19 23:43:07.477: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4591" to be "Succeeded or Failed"
May 19 23:43:07.481: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.797203ms
May 19 23:43:09.485: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008381009s
May 19 23:43:11.489: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.012142625s
May 19 23:43:13.493: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016666757s
STEP: Saw pod success
May 19 23:43:13.494: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
May 19 23:43:13.496: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
May 19 23:43:13.541: INFO: Waiting for pod pod-host-path-test to disappear
May 19 23:43:13.601: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:43:13.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4591" for this suite.
• [SLOW TEST:6.221 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":19,"skipped":264,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:43:13.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in
namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 19 23:43:14.254: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 19 23:43:16.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528594, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528594, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528594, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528594, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 19 23:43:19.297: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 19 23:43:19.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8675-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a
custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:43:20.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8624" for this suite.
STEP: Destroying namespace "webhook-8624-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.948 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":20,"skipped":282,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:43:20.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-a076a8f1-ace7-4e98-bfc5-9c162a2438d4
STEP: Creating a pod to test consume configMaps
May 19 23:43:20.618: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b16a0a4-43c9-4b27-bf74-3ce477167caf" in namespace "configmap-1521" to be "Succeeded or Failed"
May 19 23:43:20.624: INFO: Pod "pod-configmaps-5b16a0a4-43c9-4b27-bf74-3ce477167caf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.17034ms
May 19 23:43:22.628: INFO: Pod "pod-configmaps-5b16a0a4-43c9-4b27-bf74-3ce477167caf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010103121s
May 19 23:43:24.632: INFO: Pod "pod-configmaps-5b16a0a4-43c9-4b27-bf74-3ce477167caf": Phase="Running", Reason="", readiness=true. Elapsed: 4.01406991s
May 19 23:43:26.636: INFO: Pod "pod-configmaps-5b16a0a4-43c9-4b27-bf74-3ce477167caf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017974049s
STEP: Saw pod success
May 19 23:43:26.636: INFO: Pod "pod-configmaps-5b16a0a4-43c9-4b27-bf74-3ce477167caf" satisfied condition "Succeeded or Failed"
May 19 23:43:26.638: INFO: Trying to get logs from node latest-worker pod pod-configmaps-5b16a0a4-43c9-4b27-bf74-3ce477167caf container configmap-volume-test:
STEP: delete the pod
May 19 23:43:26.668: INFO: Waiting for pod pod-configmaps-5b16a0a4-43c9-4b27-bf74-3ce477167caf to disappear
May 19 23:43:26.679: INFO: Pod pod-configmaps-5b16a0a4-43c9-4b27-bf74-3ce477167caf no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:43:26.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1521" for this suite.
• [SLOW TEST:6.129 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":21,"skipped":341,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:43:26.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 19 23:43:27.602: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 19 23:43:29.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528607, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528607, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528607, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528607, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 23:43:32.675: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 23:43:32.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:43:33.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-409" for this suite. 
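The conversion test above registers the deployed webhook as the converter for a CRD. A hedged sketch of the relevant `spec.conversion` stanza follows; only the service name and namespace come from this run's log, while the `/crdconvert` path, the review versions, and the elided `caBundle` are assumptions for illustration.

```yaml
# Sketch of a CRD conversion stanza; path and versions are assumptions.
spec:
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1", "v1beta1"]
      clientConfig:
        service:
          name: e2e-test-crd-conversion-webhook
          namespace: crd-webhook-409
          path: /crdconvert          # hypothetical endpoint
        caBundle: "..."              # base64 CA cert from the test's server cert setup
```

With this in place the apiserver calls the webhook to translate between stored and requested versions, which is what lets the test list the same non-homogeneous set of CRs as both v1 and v2.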
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.389 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":22,"skipped":349,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:43:34.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 23:43:34.841: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 23:43:36.851: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528614, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528614, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528614, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528614, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 23:43:39.888: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:43:52.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3013" for this suite.
STEP: Destroying namespace "webhook-3013-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:18.256 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":23,"skipped":362,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:43:52.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
May 19 23:43:52.752: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
May 19 23:43:52.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1289'
May 19 23:43:53.310: INFO: stderr: ""
May 19 23:43:53.310: INFO: stdout: "service/agnhost-slave created\n"
May 19 23:43:53.310: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
May 19 23:43:53.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1289'
May 19 23:43:53.794: INFO: stderr: ""
May 19 23:43:53.794: INFO: stdout: "service/agnhost-master created\n"
May 19 23:43:53.795: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 19 23:43:53.795: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1289'
May 19 23:43:54.186: INFO: stderr: ""
May 19 23:43:54.186: INFO: stdout: "service/frontend created\n"
May 19 23:43:54.186: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
May 19 23:43:54.186: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1289'
May 19 23:43:54.448: INFO: stderr: ""
May 19 23:43:54.448: INFO: stdout: "deployment.apps/frontend created\n"
May 19 23:43:54.448: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 19 23:43:54.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1289'
May 19 23:43:54.791: INFO: stderr: ""
May 19 23:43:54.791: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 19 23:43:54.791: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 19 23:43:54.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1289'
May 19 23:43:55.155: INFO: stderr: ""
May 19 23:43:55.155: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May 19 23:43:55.155: INFO: Waiting for all frontend pods to be Running.
May 19 23:44:05.206: INFO: Waiting for frontend to serve content.
May 19 23:44:05.217: INFO: Trying to add a new entry to the guestbook.
May 19 23:44:05.230: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 19 23:44:05.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1289'
May 19 23:44:05.402: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 19 23:44:05.402: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
May 19 23:44:05.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1289'
May 19 23:44:05.549: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" May 19 23:44:05.549: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 19 23:44:05.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1289' May 19 23:44:05.679: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 23:44:05.679: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 19 23:44:05.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1289' May 19 23:44:05.782: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 23:44:05.782: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 19 23:44:05.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1289' May 19 23:44:06.246: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 19 23:44:06.247: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 19 23:44:06.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1289' May 19 23:44:06.659: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 23:44:06.659: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:44:06.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1289" for this suite. • [SLOW TEST:14.723 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":24,"skipped":366,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:44:07.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 23:44:08.092: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f832945-f99f-4f2a-9557-8e66490b5eaa" in namespace "downward-api-2149" to be "Succeeded or Failed" May 19 23:44:08.495: INFO: Pod "downwardapi-volume-7f832945-f99f-4f2a-9557-8e66490b5eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 402.361947ms May 19 23:44:10.674: INFO: Pod "downwardapi-volume-7f832945-f99f-4f2a-9557-8e66490b5eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.581544725s May 19 23:44:12.781: INFO: Pod "downwardapi-volume-7f832945-f99f-4f2a-9557-8e66490b5eaa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.688759906s STEP: Saw pod success May 19 23:44:12.781: INFO: Pod "downwardapi-volume-7f832945-f99f-4f2a-9557-8e66490b5eaa" satisfied condition "Succeeded or Failed" May 19 23:44:12.784: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7f832945-f99f-4f2a-9557-8e66490b5eaa container client-container: STEP: delete the pod May 19 23:44:12.903: INFO: Waiting for pod downwardapi-volume-7f832945-f99f-4f2a-9557-8e66490b5eaa to disappear May 19 23:44:12.950: INFO: Pod downwardapi-volume-7f832945-f99f-4f2a-9557-8e66490b5eaa no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:44:12.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2149" for this suite. • [SLOW TEST:5.914 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":25,"skipped":368,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:44:12.972: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 19 23:44:13.049: INFO: Waiting up to 5m0s for pod "var-expansion-7084be01-2825-4938-a5ec-f6211076da94" in namespace "var-expansion-3713" to be "Succeeded or Failed" May 19 23:44:13.106: INFO: Pod "var-expansion-7084be01-2825-4938-a5ec-f6211076da94": Phase="Pending", Reason="", readiness=false. Elapsed: 57.040526ms May 19 23:44:15.200: INFO: Pod "var-expansion-7084be01-2825-4938-a5ec-f6211076da94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150702688s May 19 23:44:17.205: INFO: Pod "var-expansion-7084be01-2825-4938-a5ec-f6211076da94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.155663608s STEP: Saw pod success May 19 23:44:17.205: INFO: Pod "var-expansion-7084be01-2825-4938-a5ec-f6211076da94" satisfied condition "Succeeded or Failed" May 19 23:44:17.208: INFO: Trying to get logs from node latest-worker pod var-expansion-7084be01-2825-4938-a5ec-f6211076da94 container dapi-container: STEP: delete the pod May 19 23:44:17.251: INFO: Waiting for pod var-expansion-7084be01-2825-4938-a5ec-f6211076da94 to disappear May 19 23:44:17.264: INFO: Pod var-expansion-7084be01-2825-4938-a5ec-f6211076da94 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:44:17.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3713" for this suite. 
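The variable-expansion test above relies on Kubernetes substituting `$(VAR)` references in a container's `args` from that container's declared environment variables. A minimal sketch of such a pod follows; the names and the value are illustrative assumptions, not taken from this run.

```yaml
# Hedged sketch of $(VAR) substitution in args; names/values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test value"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]   # $(MESSAGE) is expanded by Kubernetes, not the shell
```

Because the substitution happens when the container spec is resolved, the container's stdout carries the expanded value, which is what the test verifies via the pod logs.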
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":26,"skipped":383,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:44:17.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 19 23:44:17.373: INFO: Waiting up to 5m0s for pod "pod-7c16759c-808f-442d-9274-81d1bd8b6919" in namespace "emptydir-4672" to be "Succeeded or Failed" May 19 23:44:17.385: INFO: Pod "pod-7c16759c-808f-442d-9274-81d1bd8b6919": Phase="Pending", Reason="", readiness=false. Elapsed: 12.009965ms May 19 23:44:19.388: INFO: Pod "pod-7c16759c-808f-442d-9274-81d1bd8b6919": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015413925s May 19 23:44:21.393: INFO: Pod "pod-7c16759c-808f-442d-9274-81d1bd8b6919": Phase="Running", Reason="", readiness=true. Elapsed: 4.020456823s May 19 23:44:23.398: INFO: Pod "pod-7c16759c-808f-442d-9274-81d1bd8b6919": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.024684255s STEP: Saw pod success May 19 23:44:23.398: INFO: Pod "pod-7c16759c-808f-442d-9274-81d1bd8b6919" satisfied condition "Succeeded or Failed" May 19 23:44:23.400: INFO: Trying to get logs from node latest-worker pod pod-7c16759c-808f-442d-9274-81d1bd8b6919 container test-container: STEP: delete the pod May 19 23:44:23.436: INFO: Waiting for pod pod-7c16759c-808f-442d-9274-81d1bd8b6919 to disappear May 19 23:44:23.469: INFO: Pod pod-7c16759c-808f-442d-9274-81d1bd8b6919 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:44:23.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4672" for this suite. • [SLOW TEST:6.210 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":27,"skipped":391,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:44:23.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] 
ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-db231ba7-6831-40d4-ae2d-b7e41788a23f May 19 23:44:23.564: INFO: Pod name my-hostname-basic-db231ba7-6831-40d4-ae2d-b7e41788a23f: Found 0 pods out of 1 May 19 23:44:28.570: INFO: Pod name my-hostname-basic-db231ba7-6831-40d4-ae2d-b7e41788a23f: Found 1 pods out of 1 May 19 23:44:28.570: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-db231ba7-6831-40d4-ae2d-b7e41788a23f" are running May 19 23:44:28.590: INFO: Pod "my-hostname-basic-db231ba7-6831-40d4-ae2d-b7e41788a23f-4wnzz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 23:44:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 23:44:26 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 23:44:26 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 23:44:23 +0000 UTC Reason: Message:}]) May 19 23:44:28.591: INFO: Trying to dial the pod May 19 23:44:33.602: INFO: Controller my-hostname-basic-db231ba7-6831-40d4-ae2d-b7e41788a23f: Got expected result from replica 1 [my-hostname-basic-db231ba7-6831-40d4-ae2d-b7e41788a23f-4wnzz]: "my-hostname-basic-db231ba7-6831-40d4-ae2d-b7e41788a23f-4wnzz", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:44:33.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "replication-controller-4791" for this suite. • [SLOW TEST:10.128 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":28,"skipped":400,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:44:33.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1433.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1433.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1433.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1433.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1433.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1433.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1433.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 50.238.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.238.50_udp@PTR;check="$$(dig +tcp +noall +answer +search 50.238.100.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.100.238.50_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1433.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1433.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1433.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1433.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1433.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1433.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1433.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 50.238.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.238.50_udp@PTR;check="$$(dig +tcp +noall +answer +search 50.238.100.10.in-addr.arpa.
PTR)" && test -n "$$check" && echo OK > /results/10.100.238.50_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 19 23:44:39.892: INFO: Unable to read wheezy_udp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:39.895: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:39.897: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:39.924: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:39.978: INFO: Unable to read jessie_udp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:39.981: INFO: Unable to read jessie_tcp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:39.985: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod
dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:39.988: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:40.004: INFO: Lookups using dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62 failed for: [wheezy_udp@dns-test-service.dns-1433.svc.cluster.local wheezy_tcp@dns-test-service.dns-1433.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local jessie_udp@dns-test-service.dns-1433.svc.cluster.local jessie_tcp@dns-test-service.dns-1433.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local]
May 19 23:44:45.009: INFO: Unable to read wheezy_udp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:45.013: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:45.016: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:45.019: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod
dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:45.040: INFO: Unable to read jessie_udp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:45.043: INFO: Unable to read jessie_tcp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:45.045: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:45.048: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:45.065: INFO: Lookups using dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62 failed for: [wheezy_udp@dns-test-service.dns-1433.svc.cluster.local wheezy_tcp@dns-test-service.dns-1433.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local jessie_udp@dns-test-service.dns-1433.svc.cluster.local jessie_tcp@dns-test-service.dns-1433.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local]
May 19 23:44:50.051: INFO: Unable to read wheezy_udp@dns-test-service.dns-1433.svc.cluster.local from pod
dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:50.055: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:50.057: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:50.060: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:50.081: INFO: Unable to read jessie_udp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:50.084: INFO: Unable to read jessie_tcp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:50.087: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:50.090: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not
find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:50.109: INFO: Lookups using dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62 failed for: [wheezy_udp@dns-test-service.dns-1433.svc.cluster.local wheezy_tcp@dns-test-service.dns-1433.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local jessie_udp@dns-test-service.dns-1433.svc.cluster.local jessie_tcp@dns-test-service.dns-1433.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local]
May 19 23:44:55.009: INFO: Unable to read wheezy_udp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:55.012: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:55.015: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:55.018: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:55.035: INFO: Unable to read jessie_udp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods
dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:55.037: INFO: Unable to read jessie_tcp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:55.040: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:55.042: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:44:55.057: INFO: Lookups using dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62 failed for: [wheezy_udp@dns-test-service.dns-1433.svc.cluster.local wheezy_tcp@dns-test-service.dns-1433.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local jessie_udp@dns-test-service.dns-1433.svc.cluster.local jessie_tcp@dns-test-service.dns-1433.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local]
May 19 23:45:00.010: INFO: Unable to read wheezy_udp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:00.015: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods
dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:00.051: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:00.059: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:00.080: INFO: Unable to read jessie_udp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:00.084: INFO: Unable to read jessie_tcp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:00.087: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:00.090: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:00.106: INFO: Lookups using dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62 failed for: [wheezy_udp@dns-test-service.dns-1433.svc.cluster.local wheezy_tcp@dns-test-service.dns-1433.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local
wheezy_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local jessie_udp@dns-test-service.dns-1433.svc.cluster.local jessie_tcp@dns-test-service.dns-1433.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local]
May 19 23:45:05.009: INFO: Unable to read wheezy_udp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:05.012: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:05.014: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:05.017: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:05.078: INFO: Unable to read jessie_udp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:05.080: INFO: Unable to read jessie_tcp@dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:05.082: INFO: Unable to read
jessie_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:05.084: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local from pod dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62: the server could not find the requested resource (get pods dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62)
May 19 23:45:05.098: INFO: Lookups using dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62 failed for: [wheezy_udp@dns-test-service.dns-1433.svc.cluster.local wheezy_tcp@dns-test-service.dns-1433.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local jessie_udp@dns-test-service.dns-1433.svc.cluster.local jessie_tcp@dns-test-service.dns-1433.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1433.svc.cluster.local]
May 19 23:45:10.088: INFO: DNS probes using dns-1433/dns-test-0a1fe3a7-c2a3-4dbd-ab81-736dd0d88e62 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:45:10.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1433" for this suite.
• [SLOW TEST:37.339 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":29,"skipped":431,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:45:10.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0519 23:45:12.117002 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 19 23:45:12.117: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:45:12.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3309" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":30,"skipped":479,"failed":0}
SSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:45:12.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
May 19 23:45:12.364: INFO: Waiting up to 5m0s for pod "var-expansion-f75ebfc2-ccf1-4cc0-a477-0a84f57dd751" in namespace "var-expansion-5715" to be "Succeeded or Failed"
May 19 23:45:12.384: INFO: Pod "var-expansion-f75ebfc2-ccf1-4cc0-a477-0a84f57dd751": Phase="Pending", Reason="", readiness=false. Elapsed: 20.537901ms
May 19 23:45:14.506: INFO: Pod "var-expansion-f75ebfc2-ccf1-4cc0-a477-0a84f57dd751": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142153939s
May 19 23:45:16.710: INFO: Pod "var-expansion-f75ebfc2-ccf1-4cc0-a477-0a84f57dd751": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346301042s
May 19 23:45:18.750: INFO: Pod "var-expansion-f75ebfc2-ccf1-4cc0-a477-0a84f57dd751": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.386314898s
STEP: Saw pod success
May 19 23:45:18.750: INFO: Pod "var-expansion-f75ebfc2-ccf1-4cc0-a477-0a84f57dd751" satisfied condition "Succeeded or Failed"
May 19 23:45:18.754: INFO: Trying to get logs from node latest-worker2 pod var-expansion-f75ebfc2-ccf1-4cc0-a477-0a84f57dd751 container dapi-container:
STEP: delete the pod
May 19 23:45:19.088: INFO: Waiting for pod var-expansion-f75ebfc2-ccf1-4cc0-a477-0a84f57dd751 to disappear
May 19 23:45:19.092: INFO: Pod var-expansion-f75ebfc2-ccf1-4cc0-a477-0a84f57dd751 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:45:19.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5715" for this suite.
• [SLOW TEST:6.977 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should allow substituting values in a volume subpath [sig-storage] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":31,"skipped":482,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:45:19.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default
service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 19 23:45:27.742: INFO: 10 pods remaining
May 19 23:45:27.742: INFO: 7 pods has nil DeletionTimestamp
May 19 23:45:27.742: INFO:
May 19 23:45:29.187: INFO: 0 pods remaining
May 19 23:45:29.187: INFO: 0 pods has nil DeletionTimestamp
May 19 23:45:29.187: INFO:
May 19 23:45:30.549: INFO: 0 pods remaining
May 19 23:45:30.549: INFO: 0 pods has nil DeletionTimestamp
May 19 23:45:30.549: INFO:
STEP: Gathering metrics
W0519 23:45:31.141339 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 19 23:45:31.141: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19
23:45:31.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7198" for this suite.
• [SLOW TEST:12.047 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":32,"skipped":516,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:45:31.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
May 19 23:45:31.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:45:48.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6888" for this suite.
• [SLOW TEST:17.677 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":33,"skipped":517,"failed":0}
[sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 23:45:48.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 19 23:45:48.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1699'
May 19 23:45:49.337: INFO: stderr: ""
May 19
23:45:49.337: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 19 23:45:49.337: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1699' May 19 23:45:49.641: INFO: stderr: "" May 19 23:45:49.641: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 19 23:45:50.647: INFO: Selector matched 1 pods for map[app:agnhost] May 19 23:45:50.647: INFO: Found 0 / 1 May 19 23:45:51.646: INFO: Selector matched 1 pods for map[app:agnhost] May 19 23:45:51.647: INFO: Found 0 / 1 May 19 23:45:52.647: INFO: Selector matched 1 pods for map[app:agnhost] May 19 23:45:52.647: INFO: Found 0 / 1 May 19 23:45:53.646: INFO: Selector matched 1 pods for map[app:agnhost] May 19 23:45:53.646: INFO: Found 1 / 1 May 19 23:45:53.646: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 19 23:45:53.650: INFO: Selector matched 1 pods for map[app:agnhost] May 19 23:45:53.650: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 19 23:45:53.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-r6jt2 --namespace=kubectl-1699' May 19 23:45:53.775: INFO: stderr: "" May 19 23:45:53.775: INFO: stdout: "Name: agnhost-master-r6jt2\nNamespace: kubectl-1699\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Tue, 19 May 2020 23:45:49 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.78\nIPs:\n IP: 10.244.1.78\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://081a683feb50a2f2c14c19b437a94fc98c602cc80568877e322cc2801c626a51\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 19 May 2020 23:45:51 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-hnmfl (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-hnmfl:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-hnmfl\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-1699/agnhost-master-r6jt2 to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-master\n Normal Started 2s kubelet, latest-worker Started container 
agnhost-master\n" May 19 23:45:53.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1699' May 19 23:45:53.969: INFO: stderr: "" May 19 23:45:53.969: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1699\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-r6jt2\n" May 19 23:45:53.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1699' May 19 23:45:54.136: INFO: stderr: "" May 19 23:45:54.136: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1699\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.102.48.249\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.78:6379\nSession Affinity: None\nEvents: \n" May 19 23:45:54.140: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe node latest-control-plane' May 19 23:45:54.282: INFO: stderr: "" May 19 23:45:54.282: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n 
node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Tue, 19 May 2020 23:45:50 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 19 May 2020 23:44:14 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 19 May 2020 23:44:14 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 19 May 2020 23:44:14 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 19 May 2020 23:44:14 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system 
coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 20d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 20d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 20d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 20d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 20d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 20d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 19 23:45:54.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe namespace kubectl-1699' May 19 23:45:54.397: INFO: stderr: "" May 19 23:45:54.397: INFO: stdout: "Name: kubectl-1699\nLabels: e2e-framework=kubectl\n e2e-run=10938b84-e17e-4690-9e0e-0461ca283558\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:45:54.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1699" for this suite. 
• [SLOW TEST:5.582 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":34,"skipped":517,"failed":0} [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:45:54.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:45:54.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9416" for this suite. 
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":288,"completed":35,"skipped":517,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:45:54.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-c8ab704e-0380-498c-b1bd-8038f93e104e STEP: Creating a pod to test consume configMaps May 19 23:45:54.722: INFO: Waiting up to 5m0s for pod "pod-configmaps-d5acc474-e7bc-4086-89bb-de12527effdd" in namespace "configmap-8181" to be "Succeeded or Failed" May 19 23:45:54.757: INFO: Pod "pod-configmaps-d5acc474-e7bc-4086-89bb-de12527effdd": Phase="Pending", Reason="", readiness=false. Elapsed: 35.32013ms May 19 23:45:56.762: INFO: Pod "pod-configmaps-d5acc474-e7bc-4086-89bb-de12527effdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040172843s May 19 23:45:58.766: INFO: Pod "pod-configmaps-d5acc474-e7bc-4086-89bb-de12527effdd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044568751s STEP: Saw pod success May 19 23:45:58.766: INFO: Pod "pod-configmaps-d5acc474-e7bc-4086-89bb-de12527effdd" satisfied condition "Succeeded or Failed" May 19 23:45:58.769: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d5acc474-e7bc-4086-89bb-de12527effdd container configmap-volume-test: STEP: delete the pod May 19 23:45:58.880: INFO: Waiting for pod pod-configmaps-d5acc474-e7bc-4086-89bb-de12527effdd to disappear May 19 23:45:58.991: INFO: Pod pod-configmaps-d5acc474-e7bc-4086-89bb-de12527effdd no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:45:58.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8181" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":36,"skipped":520,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:45:59.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 19 23:45:59.078: INFO: Waiting up to 5m0s for pod 
"client-containers-481344a1-68a7-4dd2-a1f3-09f6133b4f3d" in namespace "containers-5579" to be "Succeeded or Failed" May 19 23:45:59.083: INFO: Pod "client-containers-481344a1-68a7-4dd2-a1f3-09f6133b4f3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.451219ms May 19 23:46:01.087: INFO: Pod "client-containers-481344a1-68a7-4dd2-a1f3-09f6133b4f3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009024014s May 19 23:46:03.092: INFO: Pod "client-containers-481344a1-68a7-4dd2-a1f3-09f6133b4f3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013624201s STEP: Saw pod success May 19 23:46:03.092: INFO: Pod "client-containers-481344a1-68a7-4dd2-a1f3-09f6133b4f3d" satisfied condition "Succeeded or Failed" May 19 23:46:03.096: INFO: Trying to get logs from node latest-worker2 pod client-containers-481344a1-68a7-4dd2-a1f3-09f6133b4f3d container test-container: STEP: delete the pod May 19 23:46:03.176: INFO: Waiting for pod client-containers-481344a1-68a7-4dd2-a1f3-09f6133b4f3d to disappear May 19 23:46:03.204: INFO: Pod client-containers-481344a1-68a7-4dd2-a1f3-09f6133b4f3d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:46:03.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5579" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":37,"skipped":525,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:46:03.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 23:46:03.423: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:46:04.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6143" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":38,"skipped":535,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:46:04.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-e26038ba-9010-4fb5-958e-93105d33fafa STEP: Creating a pod to test consume configMaps May 19 23:46:04.643: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-87454758-1e8d-4d99-91cf-95b9cff87ce0" in namespace "projected-1671" to be "Succeeded or Failed" May 19 23:46:04.668: INFO: Pod "pod-projected-configmaps-87454758-1e8d-4d99-91cf-95b9cff87ce0": Phase="Pending", Reason="", readiness=false. Elapsed: 25.064177ms May 19 23:46:06.673: INFO: Pod "pod-projected-configmaps-87454758-1e8d-4d99-91cf-95b9cff87ce0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029101223s May 19 23:46:08.677: INFO: Pod "pod-projected-configmaps-87454758-1e8d-4d99-91cf-95b9cff87ce0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033143616s STEP: Saw pod success May 19 23:46:08.677: INFO: Pod "pod-projected-configmaps-87454758-1e8d-4d99-91cf-95b9cff87ce0" satisfied condition "Succeeded or Failed" May 19 23:46:08.679: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-87454758-1e8d-4d99-91cf-95b9cff87ce0 container projected-configmap-volume-test: STEP: delete the pod May 19 23:46:08.707: INFO: Waiting for pod pod-projected-configmaps-87454758-1e8d-4d99-91cf-95b9cff87ce0 to disappear May 19 23:46:08.711: INFO: Pod pod-projected-configmaps-87454758-1e8d-4d99-91cf-95b9cff87ce0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:46:08.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1671" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":39,"skipped":541,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:46:08.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating 
hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 19 23:46:19.110: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8428 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 23:46:19.110: INFO: >>> kubeConfig: /root/.kube/config I0519 23:46:19.143469 7 log.go:172] (0xc00216ebb0) (0xc002b13180) Create stream I0519 23:46:19.143516 7 log.go:172] (0xc00216ebb0) (0xc002b13180) Stream added, broadcasting: 1 I0519 23:46:19.146822 7 log.go:172] (0xc00216ebb0) Reply frame received for 1 I0519 23:46:19.146874 7 log.go:172] (0xc00216ebb0) (0xc002b13220) Create stream I0519 23:46:19.146889 7 log.go:172] (0xc00216ebb0) (0xc002b13220) Stream added, broadcasting: 3 I0519 23:46:19.147742 7 log.go:172] (0xc00216ebb0) Reply frame received for 3 I0519 23:46:19.147809 7 log.go:172] (0xc00216ebb0) (0xc002cb9680) Create stream I0519 23:46:19.147838 7 log.go:172] (0xc00216ebb0) (0xc002cb9680) Stream added, broadcasting: 5 I0519 23:46:19.148774 7 log.go:172] (0xc00216ebb0) Reply frame received for 5 I0519 23:46:19.216302 7 log.go:172] (0xc00216ebb0) Data frame received for 3 I0519 23:46:19.216328 7 log.go:172] (0xc002b13220) (3) Data frame handling I0519 23:46:19.216336 7 log.go:172] (0xc002b13220) (3) Data frame sent I0519 23:46:19.216341 7 log.go:172] (0xc00216ebb0) Data frame received for 3 I0519 23:46:19.216346 7 log.go:172] (0xc002b13220) (3) Data frame handling I0519 23:46:19.216367 7 log.go:172] (0xc00216ebb0) Data frame received for 5 I0519 23:46:19.216375 7 log.go:172] (0xc002cb9680) (5) Data frame handling I0519 23:46:19.217698 7 log.go:172] (0xc00216ebb0) Data frame received for 1 I0519 23:46:19.217755 7 log.go:172] (0xc002b13180) (1) Data frame handling I0519 23:46:19.217822 7 log.go:172] (0xc002b13180) (1) Data frame sent I0519 23:46:19.217855 7 log.go:172] (0xc00216ebb0) (0xc002b13180) Stream 
removed, broadcasting: 1 I0519 23:46:19.217955 7 log.go:172] (0xc00216ebb0) Go away received I0519 23:46:19.218213 7 log.go:172] (0xc00216ebb0) (0xc002b13180) Stream removed, broadcasting: 1 I0519 23:46:19.218227 7 log.go:172] (0xc00216ebb0) (0xc002b13220) Stream removed, broadcasting: 3 I0519 23:46:19.218237 7 log.go:172] (0xc00216ebb0) (0xc002cb9680) Stream removed, broadcasting: 5 May 19 23:46:19.218: INFO: Exec stderr: "" May 19 23:46:19.218: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8428 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 23:46:19.218: INFO: >>> kubeConfig: /root/.kube/config I0519 23:46:19.245493 7 log.go:172] (0xc00225cbb0) (0xc002a36a00) Create stream I0519 23:46:19.245516 7 log.go:172] (0xc00225cbb0) (0xc002a36a00) Stream added, broadcasting: 1 I0519 23:46:19.247858 7 log.go:172] (0xc00225cbb0) Reply frame received for 1 I0519 23:46:19.247903 7 log.go:172] (0xc00225cbb0) (0xc002cb9720) Create stream I0519 23:46:19.247923 7 log.go:172] (0xc00225cbb0) (0xc002cb9720) Stream added, broadcasting: 3 I0519 23:46:19.248724 7 log.go:172] (0xc00225cbb0) Reply frame received for 3 I0519 23:46:19.248752 7 log.go:172] (0xc00225cbb0) (0xc002a36b40) Create stream I0519 23:46:19.248761 7 log.go:172] (0xc00225cbb0) (0xc002a36b40) Stream added, broadcasting: 5 I0519 23:46:19.249660 7 log.go:172] (0xc00225cbb0) Reply frame received for 5 I0519 23:46:19.311635 7 log.go:172] (0xc00225cbb0) Data frame received for 3 I0519 23:46:19.311671 7 log.go:172] (0xc002cb9720) (3) Data frame handling I0519 23:46:19.311683 7 log.go:172] (0xc002cb9720) (3) Data frame sent I0519 23:46:19.311697 7 log.go:172] (0xc00225cbb0) Data frame received for 3 I0519 23:46:19.311708 7 log.go:172] (0xc002cb9720) (3) Data frame handling I0519 23:46:19.311731 7 log.go:172] (0xc00225cbb0) Data frame received for 5 I0519 23:46:19.311741 7 log.go:172] (0xc002a36b40) (5) Data frame 
handling I0519 23:46:19.313507 7 log.go:172] (0xc00225cbb0) Data frame received for 1 I0519 23:46:19.313522 7 log.go:172] (0xc002a36a00) (1) Data frame handling I0519 23:46:19.313573 7 log.go:172] (0xc002a36a00) (1) Data frame sent I0519 23:46:19.313588 7 log.go:172] (0xc00225cbb0) (0xc002a36a00) Stream removed, broadcasting: 1 I0519 23:46:19.313598 7 log.go:172] (0xc00225cbb0) Go away received I0519 23:46:19.313710 7 log.go:172] (0xc00225cbb0) (0xc002a36a00) Stream removed, broadcasting: 1 I0519 23:46:19.313740 7 log.go:172] (0xc00225cbb0) (0xc002cb9720) Stream removed, broadcasting: 3 I0519 23:46:19.313758 7 log.go:172] (0xc00225cbb0) (0xc002a36b40) Stream removed, broadcasting: 5 May 19 23:46:19.313: INFO: Exec stderr: "" May 19 23:46:19.313: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8428 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 23:46:19.313: INFO: >>> kubeConfig: /root/.kube/config I0519 23:46:19.347527 7 log.go:172] (0xc00216f1e0) (0xc002b13400) Create stream I0519 23:46:19.347572 7 log.go:172] (0xc00216f1e0) (0xc002b13400) Stream added, broadcasting: 1 I0519 23:46:19.351224 7 log.go:172] (0xc00216f1e0) Reply frame received for 1 I0519 23:46:19.351267 7 log.go:172] (0xc00216f1e0) (0xc002cb97c0) Create stream I0519 23:46:19.351289 7 log.go:172] (0xc00216f1e0) (0xc002cb97c0) Stream added, broadcasting: 3 I0519 23:46:19.352233 7 log.go:172] (0xc00216f1e0) Reply frame received for 3 I0519 23:46:19.352269 7 log.go:172] (0xc00216f1e0) (0xc002cb9860) Create stream I0519 23:46:19.352285 7 log.go:172] (0xc00216f1e0) (0xc002cb9860) Stream added, broadcasting: 5 I0519 23:46:19.353475 7 log.go:172] (0xc00216f1e0) Reply frame received for 5 I0519 23:46:19.430889 7 log.go:172] (0xc00216f1e0) Data frame received for 5 I0519 23:46:19.430938 7 log.go:172] (0xc002cb9860) (5) Data frame handling I0519 23:46:19.430965 7 log.go:172] (0xc00216f1e0) Data frame received 
for 3 I0519 23:46:19.430991 7 log.go:172] (0xc002cb97c0) (3) Data frame handling I0519 23:46:19.431022 7 log.go:172] (0xc002cb97c0) (3) Data frame sent I0519 23:46:19.431037 7 log.go:172] (0xc00216f1e0) Data frame received for 3 I0519 23:46:19.431057 7 log.go:172] (0xc002cb97c0) (3) Data frame handling I0519 23:46:19.432339 7 log.go:172] (0xc00216f1e0) Data frame received for 1 I0519 23:46:19.432354 7 log.go:172] (0xc002b13400) (1) Data frame handling I0519 23:46:19.432364 7 log.go:172] (0xc002b13400) (1) Data frame sent I0519 23:46:19.432372 7 log.go:172] (0xc00216f1e0) (0xc002b13400) Stream removed, broadcasting: 1 I0519 23:46:19.432385 7 log.go:172] (0xc00216f1e0) Go away received I0519 23:46:19.432477 7 log.go:172] (0xc00216f1e0) (0xc002b13400) Stream removed, broadcasting: 1 I0519 23:46:19.432536 7 log.go:172] (0xc00216f1e0) (0xc002cb97c0) Stream removed, broadcasting: 3 I0519 23:46:19.432561 7 log.go:172] (0xc00216f1e0) (0xc002cb9860) Stream removed, broadcasting: 5 May 19 23:46:19.432: INFO: Exec stderr: "" May 19 23:46:19.432: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8428 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 23:46:19.432: INFO: >>> kubeConfig: /root/.kube/config I0519 23:46:19.463944 7 log.go:172] (0xc00225d1e0) (0xc002a36d20) Create stream I0519 23:46:19.463981 7 log.go:172] (0xc00225d1e0) (0xc002a36d20) Stream added, broadcasting: 1 I0519 23:46:19.466306 7 log.go:172] (0xc00225d1e0) Reply frame received for 1 I0519 23:46:19.466355 7 log.go:172] (0xc00225d1e0) (0xc002a36dc0) Create stream I0519 23:46:19.466365 7 log.go:172] (0xc00225d1e0) (0xc002a36dc0) Stream added, broadcasting: 3 I0519 23:46:19.467181 7 log.go:172] (0xc00225d1e0) Reply frame received for 3 I0519 23:46:19.467223 7 log.go:172] (0xc00225d1e0) (0xc002b8c3c0) Create stream I0519 23:46:19.467238 7 log.go:172] (0xc00225d1e0) (0xc002b8c3c0) Stream added, 
broadcasting: 5 I0519 23:46:19.467976 7 log.go:172] (0xc00225d1e0) Reply frame received for 5 I0519 23:46:19.525502 7 log.go:172] (0xc00225d1e0) Data frame received for 5 I0519 23:46:19.525548 7 log.go:172] (0xc00225d1e0) Data frame received for 3 I0519 23:46:19.525595 7 log.go:172] (0xc002a36dc0) (3) Data frame handling I0519 23:46:19.525616 7 log.go:172] (0xc002a36dc0) (3) Data frame sent I0519 23:46:19.525634 7 log.go:172] (0xc00225d1e0) Data frame received for 3 I0519 23:46:19.525657 7 log.go:172] (0xc002a36dc0) (3) Data frame handling I0519 23:46:19.525686 7 log.go:172] (0xc002b8c3c0) (5) Data frame handling I0519 23:46:19.527010 7 log.go:172] (0xc00225d1e0) Data frame received for 1 I0519 23:46:19.527080 7 log.go:172] (0xc002a36d20) (1) Data frame handling I0519 23:46:19.527134 7 log.go:172] (0xc002a36d20) (1) Data frame sent I0519 23:46:19.527169 7 log.go:172] (0xc00225d1e0) (0xc002a36d20) Stream removed, broadcasting: 1 I0519 23:46:19.527297 7 log.go:172] (0xc00225d1e0) (0xc002a36d20) Stream removed, broadcasting: 1 I0519 23:46:19.527322 7 log.go:172] (0xc00225d1e0) (0xc002a36dc0) Stream removed, broadcasting: 3 I0519 23:46:19.527412 7 log.go:172] (0xc00225d1e0) Go away received I0519 23:46:19.527638 7 log.go:172] (0xc00225d1e0) (0xc002b8c3c0) Stream removed, broadcasting: 5 May 19 23:46:19.527: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 19 23:46:19.527: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8428 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 23:46:19.527: INFO: >>> kubeConfig: /root/.kube/config I0519 23:46:19.555484 7 log.go:172] (0xc001a61d90) (0xc002b8c640) Create stream I0519 23:46:19.555514 7 log.go:172] (0xc001a61d90) (0xc002b8c640) Stream added, broadcasting: 1 I0519 23:46:19.558188 7 log.go:172] (0xc001a61d90) Reply frame received for 1 I0519 
23:46:19.558255 7 log.go:172] (0xc001a61d90) (0xc002b134a0) Create stream I0519 23:46:19.558282 7 log.go:172] (0xc001a61d90) (0xc002b134a0) Stream added, broadcasting: 3 I0519 23:46:19.559288 7 log.go:172] (0xc001a61d90) Reply frame received for 3 I0519 23:46:19.559313 7 log.go:172] (0xc001a61d90) (0xc002b8c6e0) Create stream I0519 23:46:19.559324 7 log.go:172] (0xc001a61d90) (0xc002b8c6e0) Stream added, broadcasting: 5 I0519 23:46:19.560177 7 log.go:172] (0xc001a61d90) Reply frame received for 5 I0519 23:46:19.621108 7 log.go:172] (0xc001a61d90) Data frame received for 3 I0519 23:46:19.621290 7 log.go:172] (0xc002b134a0) (3) Data frame handling I0519 23:46:19.621305 7 log.go:172] (0xc002b134a0) (3) Data frame sent I0519 23:46:19.621315 7 log.go:172] (0xc001a61d90) Data frame received for 3 I0519 23:46:19.621327 7 log.go:172] (0xc002b134a0) (3) Data frame handling I0519 23:46:19.621362 7 log.go:172] (0xc001a61d90) Data frame received for 5 I0519 23:46:19.621378 7 log.go:172] (0xc002b8c6e0) (5) Data frame handling I0519 23:46:19.622861 7 log.go:172] (0xc001a61d90) Data frame received for 1 I0519 23:46:19.622901 7 log.go:172] (0xc002b8c640) (1) Data frame handling I0519 23:46:19.622937 7 log.go:172] (0xc002b8c640) (1) Data frame sent I0519 23:46:19.623008 7 log.go:172] (0xc001a61d90) (0xc002b8c640) Stream removed, broadcasting: 1 I0519 23:46:19.623108 7 log.go:172] (0xc001a61d90) (0xc002b8c640) Stream removed, broadcasting: 1 I0519 23:46:19.623143 7 log.go:172] (0xc001a61d90) (0xc002b134a0) Stream removed, broadcasting: 3 I0519 23:46:19.623164 7 log.go:172] (0xc001a61d90) (0xc002b8c6e0) Stream removed, broadcasting: 5 May 19 23:46:19.623: INFO: Exec stderr: "" I0519 23:46:19.623219 7 log.go:172] (0xc001a61d90) Go away received May 19 23:46:19.623: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8428 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 
23:46:19.623: INFO: >>> kubeConfig: /root/.kube/config I0519 23:46:19.651134 7 log.go:172] (0xc002bfcb00) (0xc0021a50e0) Create stream I0519 23:46:19.651170 7 log.go:172] (0xc002bfcb00) (0xc0021a50e0) Stream added, broadcasting: 1 I0519 23:46:19.653963 7 log.go:172] (0xc002bfcb00) Reply frame received for 1 I0519 23:46:19.654023 7 log.go:172] (0xc002bfcb00) (0xc002b13540) Create stream I0519 23:46:19.654060 7 log.go:172] (0xc002bfcb00) (0xc002b13540) Stream added, broadcasting: 3 I0519 23:46:19.655172 7 log.go:172] (0xc002bfcb00) Reply frame received for 3 I0519 23:46:19.655227 7 log.go:172] (0xc002bfcb00) (0xc0021a5180) Create stream I0519 23:46:19.655242 7 log.go:172] (0xc002bfcb00) (0xc0021a5180) Stream added, broadcasting: 5 I0519 23:46:19.656224 7 log.go:172] (0xc002bfcb00) Reply frame received for 5 I0519 23:46:19.721216 7 log.go:172] (0xc002bfcb00) Data frame received for 3 I0519 23:46:19.721246 7 log.go:172] (0xc002b13540) (3) Data frame handling I0519 23:46:19.721255 7 log.go:172] (0xc002b13540) (3) Data frame sent I0519 23:46:19.721262 7 log.go:172] (0xc002bfcb00) Data frame received for 3 I0519 23:46:19.721267 7 log.go:172] (0xc002b13540) (3) Data frame handling I0519 23:46:19.721485 7 log.go:172] (0xc002bfcb00) Data frame received for 5 I0519 23:46:19.721503 7 log.go:172] (0xc0021a5180) (5) Data frame handling I0519 23:46:19.722699 7 log.go:172] (0xc002bfcb00) Data frame received for 1 I0519 23:46:19.722719 7 log.go:172] (0xc0021a50e0) (1) Data frame handling I0519 23:46:19.722731 7 log.go:172] (0xc0021a50e0) (1) Data frame sent I0519 23:46:19.722745 7 log.go:172] (0xc002bfcb00) (0xc0021a50e0) Stream removed, broadcasting: 1 I0519 23:46:19.722769 7 log.go:172] (0xc002bfcb00) Go away received I0519 23:46:19.722840 7 log.go:172] (0xc002bfcb00) (0xc0021a50e0) Stream removed, broadcasting: 1 I0519 23:46:19.722868 7 log.go:172] (0xc002bfcb00) (0xc002b13540) Stream removed, broadcasting: 3 I0519 23:46:19.722884 7 log.go:172] (0xc002bfcb00) (0xc0021a5180) 
Stream removed, broadcasting: 5 May 19 23:46:19.722: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 19 23:46:19.722: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8428 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 23:46:19.722: INFO: >>> kubeConfig: /root/.kube/config I0519 23:46:19.761956 7 log.go:172] (0xc002bfd130) (0xc0021a5360) Create stream I0519 23:46:19.761986 7 log.go:172] (0xc002bfd130) (0xc0021a5360) Stream added, broadcasting: 1 I0519 23:46:19.764546 7 log.go:172] (0xc002bfd130) Reply frame received for 1 I0519 23:46:19.764597 7 log.go:172] (0xc002bfd130) (0xc002b8c780) Create stream I0519 23:46:19.764620 7 log.go:172] (0xc002bfd130) (0xc002b8c780) Stream added, broadcasting: 3 I0519 23:46:19.765720 7 log.go:172] (0xc002bfd130) Reply frame received for 3 I0519 23:46:19.765823 7 log.go:172] (0xc002bfd130) (0xc002b8c820) Create stream I0519 23:46:19.765849 7 log.go:172] (0xc002bfd130) (0xc002b8c820) Stream added, broadcasting: 5 I0519 23:46:19.766900 7 log.go:172] (0xc002bfd130) Reply frame received for 5 I0519 23:46:19.837624 7 log.go:172] (0xc002bfd130) Data frame received for 5 I0519 23:46:19.837651 7 log.go:172] (0xc002b8c820) (5) Data frame handling I0519 23:46:19.837678 7 log.go:172] (0xc002bfd130) Data frame received for 3 I0519 23:46:19.837692 7 log.go:172] (0xc002b8c780) (3) Data frame handling I0519 23:46:19.837702 7 log.go:172] (0xc002b8c780) (3) Data frame sent I0519 23:46:19.837714 7 log.go:172] (0xc002bfd130) Data frame received for 3 I0519 23:46:19.837726 7 log.go:172] (0xc002b8c780) (3) Data frame handling I0519 23:46:19.839134 7 log.go:172] (0xc002bfd130) Data frame received for 1 I0519 23:46:19.839164 7 log.go:172] (0xc0021a5360) (1) Data frame handling I0519 23:46:19.839197 7 log.go:172] (0xc0021a5360) (1) Data frame sent I0519 
23:46:19.839220 7 log.go:172] (0xc002bfd130) (0xc0021a5360) Stream removed, broadcasting: 1 I0519 23:46:19.839324 7 log.go:172] (0xc002bfd130) (0xc0021a5360) Stream removed, broadcasting: 1 I0519 23:46:19.839341 7 log.go:172] (0xc002bfd130) (0xc002b8c780) Stream removed, broadcasting: 3 I0519 23:46:19.839353 7 log.go:172] (0xc002bfd130) (0xc002b8c820) Stream removed, broadcasting: 5 May 19 23:46:19.839: INFO: Exec stderr: "" May 19 23:46:19.839: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8428 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 23:46:19.839: INFO: >>> kubeConfig: /root/.kube/config I0519 23:46:19.840020 7 log.go:172] (0xc002bfd130) Go away received I0519 23:46:19.873058 7 log.go:172] (0xc003838420) (0xc002b8cb40) Create stream I0519 23:46:19.873086 7 log.go:172] (0xc003838420) (0xc002b8cb40) Stream added, broadcasting: 1 I0519 23:46:19.875921 7 log.go:172] (0xc003838420) Reply frame received for 1 I0519 23:46:19.875960 7 log.go:172] (0xc003838420) (0xc002cb9900) Create stream I0519 23:46:19.875974 7 log.go:172] (0xc003838420) (0xc002cb9900) Stream added, broadcasting: 3 I0519 23:46:19.876760 7 log.go:172] (0xc003838420) Reply frame received for 3 I0519 23:46:19.876796 7 log.go:172] (0xc003838420) (0xc002cb9ae0) Create stream I0519 23:46:19.876807 7 log.go:172] (0xc003838420) (0xc002cb9ae0) Stream added, broadcasting: 5 I0519 23:46:19.877987 7 log.go:172] (0xc003838420) Reply frame received for 5 I0519 23:46:19.947515 7 log.go:172] (0xc003838420) Data frame received for 5 I0519 23:46:19.947556 7 log.go:172] (0xc002cb9ae0) (5) Data frame handling I0519 23:46:19.947581 7 log.go:172] (0xc003838420) Data frame received for 3 I0519 23:46:19.947596 7 log.go:172] (0xc002cb9900) (3) Data frame handling I0519 23:46:19.947606 7 log.go:172] (0xc002cb9900) (3) Data frame sent I0519 23:46:19.947627 7 log.go:172] (0xc003838420) Data frame 
received for 3 I0519 23:46:19.947636 7 log.go:172] (0xc002cb9900) (3) Data frame handling I0519 23:46:19.948657 7 log.go:172] (0xc003838420) Data frame received for 1 I0519 23:46:19.948685 7 log.go:172] (0xc002b8cb40) (1) Data frame handling I0519 23:46:19.948699 7 log.go:172] (0xc002b8cb40) (1) Data frame sent I0519 23:46:19.948721 7 log.go:172] (0xc003838420) (0xc002b8cb40) Stream removed, broadcasting: 1 I0519 23:46:19.948749 7 log.go:172] (0xc003838420) Go away received I0519 23:46:19.948966 7 log.go:172] (0xc003838420) (0xc002b8cb40) Stream removed, broadcasting: 1 I0519 23:46:19.948997 7 log.go:172] (0xc003838420) (0xc002cb9900) Stream removed, broadcasting: 3 I0519 23:46:19.949007 7 log.go:172] (0xc003838420) (0xc002cb9ae0) Stream removed, broadcasting: 5 May 19 23:46:19.949: INFO: Exec stderr: "" May 19 23:46:19.949: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8428 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 23:46:19.949: INFO: >>> kubeConfig: /root/.kube/config I0519 23:46:19.976263 7 log.go:172] (0xc00225d810) (0xc002a36fa0) Create stream I0519 23:46:19.976297 7 log.go:172] (0xc00225d810) (0xc002a36fa0) Stream added, broadcasting: 1 I0519 23:46:19.978962 7 log.go:172] (0xc00225d810) Reply frame received for 1 I0519 23:46:19.979004 7 log.go:172] (0xc00225d810) (0xc002b135e0) Create stream I0519 23:46:19.979031 7 log.go:172] (0xc00225d810) (0xc002b135e0) Stream added, broadcasting: 3 I0519 23:46:19.980163 7 log.go:172] (0xc00225d810) Reply frame received for 3 I0519 23:46:19.980223 7 log.go:172] (0xc00225d810) (0xc002b13680) Create stream I0519 23:46:19.980243 7 log.go:172] (0xc00225d810) (0xc002b13680) Stream added, broadcasting: 5 I0519 23:46:19.981555 7 log.go:172] (0xc00225d810) Reply frame received for 5 I0519 23:46:20.062437 7 log.go:172] (0xc00225d810) Data frame received for 3 I0519 23:46:20.062477 7 log.go:172] (0xc002b135e0) 
(3) Data frame handling I0519 23:46:20.062504 7 log.go:172] (0xc002b135e0) (3) Data frame sent I0519 23:46:20.062559 7 log.go:172] (0xc00225d810) Data frame received for 3 I0519 23:46:20.062581 7 log.go:172] (0xc002b135e0) (3) Data frame handling I0519 23:46:20.062606 7 log.go:172] (0xc00225d810) Data frame received for 5 I0519 23:46:20.062631 7 log.go:172] (0xc002b13680) (5) Data frame handling I0519 23:46:20.063788 7 log.go:172] (0xc00225d810) Data frame received for 1 I0519 23:46:20.063816 7 log.go:172] (0xc002a36fa0) (1) Data frame handling I0519 23:46:20.063833 7 log.go:172] (0xc002a36fa0) (1) Data frame sent I0519 23:46:20.063852 7 log.go:172] (0xc00225d810) (0xc002a36fa0) Stream removed, broadcasting: 1 I0519 23:46:20.063874 7 log.go:172] (0xc00225d810) Go away received I0519 23:46:20.063984 7 log.go:172] (0xc00225d810) (0xc002a36fa0) Stream removed, broadcasting: 1 I0519 23:46:20.064003 7 log.go:172] (0xc00225d810) (0xc002b135e0) Stream removed, broadcasting: 3 I0519 23:46:20.064018 7 log.go:172] (0xc00225d810) (0xc002b13680) Stream removed, broadcasting: 5 May 19 23:46:20.064: INFO: Exec stderr: "" May 19 23:46:20.064: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8428 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 23:46:20.064: INFO: >>> kubeConfig: /root/.kube/config I0519 23:46:20.094027 7 log.go:172] (0xc002942000) (0xc002662000) Create stream I0519 23:46:20.094055 7 log.go:172] (0xc002942000) (0xc002662000) Stream added, broadcasting: 1 I0519 23:46:20.096067 7 log.go:172] (0xc002942000) Reply frame received for 1 I0519 23:46:20.096108 7 log.go:172] (0xc002942000) (0xc0026620a0) Create stream I0519 23:46:20.096118 7 log.go:172] (0xc002942000) (0xc0026620a0) Stream added, broadcasting: 3 I0519 23:46:20.096965 7 log.go:172] (0xc002942000) Reply frame received for 3 I0519 23:46:20.096992 7 log.go:172] (0xc002942000) 
(0xc002662140) Create stream I0519 23:46:20.097002 7 log.go:172] (0xc002942000) (0xc002662140) Stream added, broadcasting: 5 I0519 23:46:20.097970 7 log.go:172] (0xc002942000) Reply frame received for 5 I0519 23:46:20.164089 7 log.go:172] (0xc002942000) Data frame received for 5 I0519 23:46:20.164126 7 log.go:172] (0xc002662140) (5) Data frame handling I0519 23:46:20.164151 7 log.go:172] (0xc002942000) Data frame received for 3 I0519 23:46:20.164160 7 log.go:172] (0xc0026620a0) (3) Data frame handling I0519 23:46:20.164170 7 log.go:172] (0xc0026620a0) (3) Data frame sent I0519 23:46:20.164179 7 log.go:172] (0xc002942000) Data frame received for 3 I0519 23:46:20.164186 7 log.go:172] (0xc0026620a0) (3) Data frame handling I0519 23:46:20.165761 7 log.go:172] (0xc002942000) Data frame received for 1 I0519 23:46:20.165826 7 log.go:172] (0xc002662000) (1) Data frame handling I0519 23:46:20.165853 7 log.go:172] (0xc002662000) (1) Data frame sent I0519 23:46:20.165873 7 log.go:172] (0xc002942000) (0xc002662000) Stream removed, broadcasting: 1 I0519 23:46:20.165889 7 log.go:172] (0xc002942000) Go away received I0519 23:46:20.166049 7 log.go:172] (0xc002942000) (0xc002662000) Stream removed, broadcasting: 1 I0519 23:46:20.166069 7 log.go:172] (0xc002942000) (0xc0026620a0) Stream removed, broadcasting: 3 I0519 23:46:20.166079 7 log.go:172] (0xc002942000) (0xc002662140) Stream removed, broadcasting: 5 May 19 23:46:20.166: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:46:20.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8428" for this suite. 
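The exec checks above distinguish a container whose /etc/hosts the kubelet manages from one that mounts the file itself, and from containers in a hostNetwork pod, which the kubelet never touches. A minimal sketch of that fixture (pod name, image, and mount are illustrative, not the exact e2e fixture):

```yaml
# Hypothetical sketch: one container gets the kubelet-managed /etc/hosts,
# the other mounts its own copy, so the kubelet leaves it alone.
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo
spec:
  volumes:
  - name: host-etc
    hostPath:
      path: /etc/hosts
  containers:
  - name: managed              # kubelet injects its managed /etc/hosts here
    image: busybox
    command: ["sleep", "3600"]
  - name: unmanaged            # explicit /etc/hosts mount disables management
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-etc
      mountPath: /etc/hosts
```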
• [SLOW TEST:11.475 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":40,"skipped":565,"failed":0} [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:46:20.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:46:36.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7985" for this suite. • [SLOW TEST:16.272 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":288,"completed":41,"skipped":565,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:46:36.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 23:46:37.160: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 23:46:39.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528797, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528797, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528797, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725528797, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 23:46:42.279: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:46:42.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5960" for this suite. STEP: Destroying namespace "webhook-5960-markers" for this suite. 
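The "listing mutating webhooks" flow above creates several webhook configurations, lists them, and deletes them as a collection. A hedged sketch of the kind of object involved (the configuration name, label, and service path are assumptions; the service name and namespace follow the log above):

```yaml
# Illustrative MutatingWebhookConfiguration of the sort listed and
# collection-deleted by the test; field values are assumptions.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-mutating-webhook
  labels:
    e2e-list-test: "true"        # a label selector enables bulk list/delete
webhooks:
- name: mutate-configmaps.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-5960
      path: /mutating-configmaps  # assumed handler path
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
```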
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.523 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":42,"skipped":578,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:46:42.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 19 23:46:43.045: INFO: Waiting up to 5m0s for pod "client-containers-b9e72ca1-9b17-4279-9d4d-16193748627a" in namespace "containers-8636" to be "Succeeded or Failed" May 19 23:46:43.062: INFO: Pod "client-containers-b9e72ca1-9b17-4279-9d4d-16193748627a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.636594ms May 19 23:46:45.066: INFO: Pod "client-containers-b9e72ca1-9b17-4279-9d4d-16193748627a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020681235s May 19 23:46:47.070: INFO: Pod "client-containers-b9e72ca1-9b17-4279-9d4d-16193748627a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024604458s STEP: Saw pod success May 19 23:46:47.070: INFO: Pod "client-containers-b9e72ca1-9b17-4279-9d4d-16193748627a" satisfied condition "Succeeded or Failed" May 19 23:46:47.073: INFO: Trying to get logs from node latest-worker pod client-containers-b9e72ca1-9b17-4279-9d4d-16193748627a container test-container: STEP: delete the pod May 19 23:46:47.318: INFO: Waiting for pod client-containers-b9e72ca1-9b17-4279-9d4d-16193748627a to disappear May 19 23:46:47.324: INFO: Pod client-containers-b9e72ca1-9b17-4279-9d4d-16193748627a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:46:47.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8636" for this suite. 
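The "override arguments (docker cmd)" test above works by setting `args` in the pod spec, which replaces the image's default CMD while leaving its ENTRYPOINT intact. A minimal sketch, assuming an illustrative image and argument list (not the exact fixture from the run):

```yaml
# Sketch: `args` overrides the image's default arguments (Docker CMD).
apiVersion: v1
kind: Pod
metadata:
  name: override-args-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                          # assumed image
    command: ["echo"]                       # replaces ENTRYPOINT
    args: ["overridden", "arguments"]       # replaces the image CMD
```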
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":43,"skipped":581,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:46:47.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 23:46:47.488: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fef17731-4217-4d93-96c0-d726221967b7" in namespace "downward-api-7551" to be "Succeeded or Failed" May 19 23:46:47.508: INFO: Pod "downwardapi-volume-fef17731-4217-4d93-96c0-d726221967b7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.332272ms May 19 23:46:49.662: INFO: Pod "downwardapi-volume-fef17731-4217-4d93-96c0-d726221967b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17337615s May 19 23:46:51.667: INFO: Pod "downwardapi-volume-fef17731-4217-4d93-96c0-d726221967b7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.178990579s STEP: Saw pod success May 19 23:46:51.667: INFO: Pod "downwardapi-volume-fef17731-4217-4d93-96c0-d726221967b7" satisfied condition "Succeeded or Failed" May 19 23:46:51.671: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-fef17731-4217-4d93-96c0-d726221967b7 container client-container: STEP: delete the pod May 19 23:46:51.703: INFO: Waiting for pod downwardapi-volume-fef17731-4217-4d93-96c0-d726221967b7 to disappear May 19 23:46:51.719: INFO: Pod downwardapi-volume-fef17731-4217-4d93-96c0-d726221967b7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:46:51.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7551" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":44,"skipped":616,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:46:51.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 19 23:46:51.771: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3205' May 19 23:46:52.135: INFO: stderr: "" May 19 23:46:52.135: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 19 23:46:53.139: INFO: Selector matched 1 pods for map[app:agnhost] May 19 23:46:53.139: INFO: Found 0 / 1 May 19 23:46:54.139: INFO: Selector matched 1 pods for map[app:agnhost] May 19 23:46:54.139: INFO: Found 0 / 1 May 19 23:46:55.139: INFO: Selector matched 1 pods for map[app:agnhost] May 19 23:46:55.139: INFO: Found 0 / 1 May 19 23:46:56.139: INFO: Selector matched 1 pods for map[app:agnhost] May 19 23:46:56.139: INFO: Found 1 / 1 May 19 23:46:56.139: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 19 23:46:56.142: INFO: Selector matched 1 pods for map[app:agnhost] May 19 23:46:56.142: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 19 23:46:56.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-zhnn6 --namespace=kubectl-3205 -p {"metadata":{"annotations":{"x":"y"}}}' May 19 23:46:56.242: INFO: stderr: "" May 19 23:46:56.242: INFO: stdout: "pod/agnhost-master-zhnn6 patched\n" STEP: checking annotations May 19 23:46:56.315: INFO: Selector matched 1 pods for map[app:agnhost] May 19 23:46:56.315: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:46:56.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3205" for this suite. 
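The patch body sent by `kubectl patch` in the run above is a strategic merge patch; expressed as YAML it is just the annotation being merged into the pod's metadata:

```yaml
# The JSON patch {"metadata":{"annotations":{"x":"y"}}} from the log,
# written as YAML; kubectl merges this into the existing pod object.
metadata:
  annotations:
    x: "y"
```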
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":45,"skipped":628,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:46:56.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:47:13.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2011" for this suite. • [SLOW TEST:17.359 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":288,"completed":46,"skipped":663,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:47:13.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-1c11cbd2-d519-4384-8e87-bf363e4ce3c3 STEP: Creating a pod to test consume configMaps May 19 23:47:13.760: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-af1f1c49-1018-46bb-96ac-dbac9f562d14" in namespace "projected-9196" to be "Succeeded or Failed" May 19 23:47:13.800: INFO: Pod "pod-projected-configmaps-af1f1c49-1018-46bb-96ac-dbac9f562d14": Phase="Pending", Reason="", readiness=false. Elapsed: 39.706006ms May 19 23:47:15.896: INFO: Pod "pod-projected-configmaps-af1f1c49-1018-46bb-96ac-dbac9f562d14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136244246s May 19 23:47:17.901: INFO: Pod "pod-projected-configmaps-af1f1c49-1018-46bb-96ac-dbac9f562d14": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.140463878s May 19 23:47:19.905: INFO: Pod "pod-projected-configmaps-af1f1c49-1018-46bb-96ac-dbac9f562d14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.145357983s STEP: Saw pod success May 19 23:47:19.906: INFO: Pod "pod-projected-configmaps-af1f1c49-1018-46bb-96ac-dbac9f562d14" satisfied condition "Succeeded or Failed" May 19 23:47:19.911: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-af1f1c49-1018-46bb-96ac-dbac9f562d14 container projected-configmap-volume-test: STEP: delete the pod May 19 23:47:19.943: INFO: Waiting for pod pod-projected-configmaps-af1f1c49-1018-46bb-96ac-dbac9f562d14 to disappear May 19 23:47:19.948: INFO: Pod pod-projected-configmaps-af1f1c49-1018-46bb-96ac-dbac9f562d14 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:47:19.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9196" for this suite. 
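The projected-ConfigMap test above mounts a ConfigMap through a `projected` volume with a key-to-path mapping and reads it as a non-root user. A hedged sketch of that shape (key names, paths, and the UID are assumptions; the ConfigMap name follows the log):

```yaml
# Illustrative pod consuming a ConfigMap via a projected volume with an
# items mapping, running as non-root; field values are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  securityContext:
    runAsUser: 1000              # non-root, as the test title requires
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-1c11cbd2-d519-4384-8e87-bf363e4ce3c3
          items:
          - key: data-2          # assumed key
            path: path/to/data-2 # mapped path inside the mount
```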
• [SLOW TEST:6.265 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":47,"skipped":684,"failed":0} [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:47:19.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:47:20.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4260" for this suite. 
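The "busybox command that always fails" fixture above is a pod whose container exits non-zero immediately; the test then verifies that such a pod can still be deleted cleanly. A minimal sketch (name and image are illustrative):

```yaml
# Sketch: a container that fails on every start; deletion must still work.
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]   # exits 1 every time
```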
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":48,"skipped":684,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:47:20.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-cf225af7-ea11-40b2-9a5a-2adcec628668 in namespace container-probe-1461 May 19 23:47:24.320: INFO: Started pod busybox-cf225af7-ea11-40b2-9a5a-2adcec628668 in namespace container-probe-1461 STEP: checking the pod's current state and verifying that restartCount is present May 19 23:47:24.322: INFO: Initial restart count of pod busybox-cf225af7-ea11-40b2-9a5a-2adcec628668 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:51:25.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1461" for this suite. 
• [SLOW TEST:244.897 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":49,"skipped":702,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:51:25.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 19 23:51:25.165: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:51:34.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8035" for this suite. 
• [SLOW TEST:9.876 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":50,"skipped":707,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:51:34.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-9945 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9945 STEP: creating replication controller externalsvc in namespace services-9945 I0519 23:51:35.276977 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9945, replica count: 2 I0519 23:51:38.327402 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 23:51:41.327658 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 19 23:51:41.375: INFO: Creating new exec pod May 19 23:51:45.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9945 execpod6n5sv -- /bin/sh -x -c nslookup nodeport-service' May 19 23:51:48.454: INFO: stderr: "I0519 23:51:48.341687 729 log.go:172] (0xc00083eb00) (0xc00047a960) Create stream\nI0519 23:51:48.341733 729 log.go:172] (0xc00083eb00) (0xc00047a960) Stream added, broadcasting: 1\nI0519 23:51:48.343747 729 log.go:172] (0xc00083eb00) Reply frame received for 1\nI0519 23:51:48.343791 729 log.go:172] (0xc00083eb00) (0xc000680320) Create stream\nI0519 23:51:48.343803 729 log.go:172] (0xc00083eb00) (0xc000680320) Stream added, broadcasting: 3\nI0519 23:51:48.344726 729 log.go:172] (0xc00083eb00) Reply frame received for 3\nI0519 23:51:48.344758 729 log.go:172] (0xc00083eb00) (0xc000386780) Create stream\nI0519 23:51:48.344769 729 log.go:172] (0xc00083eb00) (0xc000386780) Stream added, broadcasting: 5\nI0519 23:51:48.345953 729 log.go:172] (0xc00083eb00) Reply frame received for 5\nI0519 23:51:48.423279 729 log.go:172] (0xc00083eb00) Data frame received for 5\nI0519 23:51:48.423311 729 log.go:172] (0xc000386780) (5) Data frame handling\nI0519 23:51:48.423324 729 log.go:172] (0xc000386780) (5) Data frame sent\n+ nslookup nodeport-service\nI0519 23:51:48.443445 729 log.go:172] (0xc00083eb00) Data frame received for 3\nI0519 23:51:48.443472 729 log.go:172] (0xc000680320) (3) Data frame handling\nI0519 23:51:48.443493 729 log.go:172] (0xc000680320) (3) Data frame sent\nI0519 23:51:48.444263 729 log.go:172] (0xc00083eb00) Data frame received for 3\nI0519 23:51:48.444279 729 log.go:172] (0xc000680320) 
(3) Data frame handling\nI0519 23:51:48.444288 729 log.go:172] (0xc000680320) (3) Data frame sent\nI0519 23:51:48.444709 729 log.go:172] (0xc00083eb00) Data frame received for 5\nI0519 23:51:48.444727 729 log.go:172] (0xc000386780) (5) Data frame handling\nI0519 23:51:48.444741 729 log.go:172] (0xc00083eb00) Data frame received for 3\nI0519 23:51:48.444760 729 log.go:172] (0xc000680320) (3) Data frame handling\nI0519 23:51:48.446929 729 log.go:172] (0xc00083eb00) Data frame received for 1\nI0519 23:51:48.446955 729 log.go:172] (0xc00047a960) (1) Data frame handling\nI0519 23:51:48.446972 729 log.go:172] (0xc00047a960) (1) Data frame sent\nI0519 23:51:48.447001 729 log.go:172] (0xc00083eb00) (0xc00047a960) Stream removed, broadcasting: 1\nI0519 23:51:48.447199 729 log.go:172] (0xc00083eb00) Go away received\nI0519 23:51:48.447523 729 log.go:172] (0xc00083eb00) (0xc00047a960) Stream removed, broadcasting: 1\nI0519 23:51:48.447541 729 log.go:172] (0xc00083eb00) (0xc000680320) Stream removed, broadcasting: 3\nI0519 23:51:48.447552 729 log.go:172] (0xc00083eb00) (0xc000386780) Stream removed, broadcasting: 5\n" May 19 23:51:48.454: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-9945.svc.cluster.local\tcanonical name = externalsvc.services-9945.svc.cluster.local.\nName:\texternalsvc.services-9945.svc.cluster.local\nAddress: 10.96.189.26\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9945, will wait for the garbage collector to delete the pods May 19 23:51:48.515: INFO: Deleting ReplicationController externalsvc took: 7.040579ms May 19 23:51:48.815: INFO: Terminating ReplicationController externalsvc pods took: 300.275ms May 19 23:51:55.504: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:51:55.518: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "services-9945" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.648 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":51,"skipped":718,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:51:55.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 23:51:55.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f56cedd-997e-4c97-b398-b5bd97b06acf" in namespace "downward-api-7932" to be "Succeeded or Failed" May 19 23:51:55.648: INFO: Pod "downwardapi-volume-4f56cedd-997e-4c97-b398-b5bd97b06acf": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.506334ms May 19 23:51:57.653: INFO: Pod "downwardapi-volume-4f56cedd-997e-4c97-b398-b5bd97b06acf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008170611s May 19 23:51:59.657: INFO: Pod "downwardapi-volume-4f56cedd-997e-4c97-b398-b5bd97b06acf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01274595s STEP: Saw pod success May 19 23:51:59.657: INFO: Pod "downwardapi-volume-4f56cedd-997e-4c97-b398-b5bd97b06acf" satisfied condition "Succeeded or Failed" May 19 23:51:59.660: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4f56cedd-997e-4c97-b398-b5bd97b06acf container client-container: STEP: delete the pod May 19 23:51:59.722: INFO: Waiting for pod downwardapi-volume-4f56cedd-997e-4c97-b398-b5bd97b06acf to disappear May 19 23:51:59.732: INFO: Pod downwardapi-volume-4f56cedd-997e-4c97-b398-b5bd97b06acf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:51:59.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7932" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":52,"skipped":726,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:51:59.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 23:51:59.837: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f89a79f-d72c-47d6-9bdf-160d8e83f6a0" in namespace "downward-api-2552" to be "Succeeded or Failed" May 19 23:51:59.872: INFO: Pod "downwardapi-volume-4f89a79f-d72c-47d6-9bdf-160d8e83f6a0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.510284ms May 19 23:52:01.876: INFO: Pod "downwardapi-volume-4f89a79f-d72c-47d6-9bdf-160d8e83f6a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038779292s May 19 23:52:03.880: INFO: Pod "downwardapi-volume-4f89a79f-d72c-47d6-9bdf-160d8e83f6a0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042852003s STEP: Saw pod success May 19 23:52:03.880: INFO: Pod "downwardapi-volume-4f89a79f-d72c-47d6-9bdf-160d8e83f6a0" satisfied condition "Succeeded or Failed" May 19 23:52:03.883: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4f89a79f-d72c-47d6-9bdf-160d8e83f6a0 container client-container: STEP: delete the pod May 19 23:52:03.923: INFO: Waiting for pod downwardapi-volume-4f89a79f-d72c-47d6-9bdf-160d8e83f6a0 to disappear May 19 23:52:03.930: INFO: Pod downwardapi-volume-4f89a79f-d72c-47d6-9bdf-160d8e83f6a0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:52:03.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2552" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":53,"skipped":748,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:52:03.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the 
type=ExternalName in namespace services-7080 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7080 I0519 23:52:04.097589 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7080, replica count: 2 I0519 23:52:07.148029 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 23:52:10.148268 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 23:52:10.148: INFO: Creating new exec pod May 19 23:52:15.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7080 execpodw6gfk -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 19 23:52:15.381: INFO: stderr: "I0519 23:52:15.298253 760 log.go:172] (0xc000a7d4a0) (0xc0006d06e0) Create stream\nI0519 23:52:15.298302 760 log.go:172] (0xc000a7d4a0) (0xc0006d06e0) Stream added, broadcasting: 1\nI0519 23:52:15.300220 760 log.go:172] (0xc000a7d4a0) Reply frame received for 1\nI0519 23:52:15.300270 760 log.go:172] (0xc000a7d4a0) (0xc00040d680) Create stream\nI0519 23:52:15.300283 760 log.go:172] (0xc000a7d4a0) (0xc00040d680) Stream added, broadcasting: 3\nI0519 23:52:15.301356 760 log.go:172] (0xc000a7d4a0) Reply frame received for 3\nI0519 23:52:15.301416 760 log.go:172] (0xc000a7d4a0) (0xc0006d1040) Create stream\nI0519 23:52:15.301438 760 log.go:172] (0xc000a7d4a0) (0xc0006d1040) Stream added, broadcasting: 5\nI0519 23:52:15.302329 760 log.go:172] (0xc000a7d4a0) Reply frame received for 5\nI0519 23:52:15.373999 760 log.go:172] (0xc000a7d4a0) Data frame received for 5\nI0519 23:52:15.374031 760 log.go:172] (0xc0006d1040) (5) Data frame handling\nI0519 23:52:15.374050 760 log.go:172] 
(0xc0006d1040) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0519 23:52:15.374069 760 log.go:172] (0xc000a7d4a0) Data frame received for 5\nI0519 23:52:15.374104 760 log.go:172] (0xc0006d1040) (5) Data frame handling\nI0519 23:52:15.374125 760 log.go:172] (0xc000a7d4a0) Data frame received for 3\nI0519 23:52:15.374132 760 log.go:172] (0xc00040d680) (3) Data frame handling\nI0519 23:52:15.375821 760 log.go:172] (0xc000a7d4a0) Data frame received for 1\nI0519 23:52:15.375848 760 log.go:172] (0xc0006d06e0) (1) Data frame handling\nI0519 23:52:15.375871 760 log.go:172] (0xc0006d06e0) (1) Data frame sent\nI0519 23:52:15.375886 760 log.go:172] (0xc000a7d4a0) (0xc0006d06e0) Stream removed, broadcasting: 1\nI0519 23:52:15.375898 760 log.go:172] (0xc000a7d4a0) Go away received\nI0519 23:52:15.376550 760 log.go:172] (0xc000a7d4a0) (0xc0006d06e0) Stream removed, broadcasting: 1\nI0519 23:52:15.376576 760 log.go:172] (0xc000a7d4a0) (0xc00040d680) Stream removed, broadcasting: 3\nI0519 23:52:15.376589 760 log.go:172] (0xc000a7d4a0) (0xc0006d1040) Stream removed, broadcasting: 5\n" May 19 23:52:15.382: INFO: stdout: "" May 19 23:52:15.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7080 execpodw6gfk -- /bin/sh -x -c nc -zv -t -w 2 10.106.64.108 80' May 19 23:52:15.588: INFO: stderr: "I0519 23:52:15.514613 781 log.go:172] (0xc000c62e70) (0xc00012aa00) Create stream\nI0519 23:52:15.514665 781 log.go:172] (0xc000c62e70) (0xc00012aa00) Stream added, broadcasting: 1\nI0519 23:52:15.516674 781 log.go:172] (0xc000c62e70) Reply frame received for 1\nI0519 23:52:15.516729 781 log.go:172] (0xc000c62e70) (0xc00025e1e0) Create stream\nI0519 23:52:15.516752 781 log.go:172] (0xc000c62e70) (0xc00025e1e0) Stream added, broadcasting: 3\nI0519 23:52:15.518037 781 log.go:172] (0xc000c62e70) Reply frame received for 3\nI0519 
23:52:15.518095 781 log.go:172] (0xc000c62e70) (0xc00025e960) Create stream\nI0519 23:52:15.518112 781 log.go:172] (0xc000c62e70) (0xc00025e960) Stream added, broadcasting: 5\nI0519 23:52:15.519050 781 log.go:172] (0xc000c62e70) Reply frame received for 5\nI0519 23:52:15.580920 781 log.go:172] (0xc000c62e70) Data frame received for 5\nI0519 23:52:15.580950 781 log.go:172] (0xc00025e960) (5) Data frame handling\nI0519 23:52:15.580969 781 log.go:172] (0xc00025e960) (5) Data frame sent\nI0519 23:52:15.580979 781 log.go:172] (0xc000c62e70) Data frame received for 5\nI0519 23:52:15.580988 781 log.go:172] (0xc00025e960) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.64.108 80\nConnection to 10.106.64.108 80 port [tcp/http] succeeded!\nI0519 23:52:15.581028 781 log.go:172] (0xc000c62e70) Data frame received for 3\nI0519 23:52:15.581054 781 log.go:172] (0xc00025e1e0) (3) Data frame handling\nI0519 23:52:15.582946 781 log.go:172] (0xc000c62e70) Data frame received for 1\nI0519 23:52:15.582968 781 log.go:172] (0xc00012aa00) (1) Data frame handling\nI0519 23:52:15.582980 781 log.go:172] (0xc00012aa00) (1) Data frame sent\nI0519 23:52:15.583129 781 log.go:172] (0xc000c62e70) (0xc00012aa00) Stream removed, broadcasting: 1\nI0519 23:52:15.583166 781 log.go:172] (0xc000c62e70) Go away received\nI0519 23:52:15.583574 781 log.go:172] (0xc000c62e70) (0xc00012aa00) Stream removed, broadcasting: 1\nI0519 23:52:15.583598 781 log.go:172] (0xc000c62e70) (0xc00025e1e0) Stream removed, broadcasting: 3\nI0519 23:52:15.583608 781 log.go:172] (0xc000c62e70) (0xc00025e960) Stream removed, broadcasting: 5\n" May 19 23:52:15.588: INFO: stdout: "" May 19 23:52:15.588: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:52:15.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7080" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.685 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":54,"skipped":749,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:52:15.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 19 23:52:15.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 19 23:52:15.833: INFO: stderr: "" May 19 23:52:15.833: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:52:15.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8420" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":55,"skipped":750,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:52:15.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 19 23:52:15.965: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7710 /api/v1/namespaces/watch-7710/configmaps/e2e-watch-test-watch-closed 1b8bcfaa-3559-4170-bf7a-c7c20d39c3b1 6077224 0 2020-05-19 23:52:15 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-19 23:52:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 19 23:52:15.965: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7710 /api/v1/namespaces/watch-7710/configmaps/e2e-watch-test-watch-closed 1b8bcfaa-3559-4170-bf7a-c7c20d39c3b1 6077225 0 2020-05-19 23:52:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-19 23:52:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 19 23:52:15.981: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7710 /api/v1/namespaces/watch-7710/configmaps/e2e-watch-test-watch-closed 1b8bcfaa-3559-4170-bf7a-c7c20d39c3b1 6077226 0 2020-05-19 23:52:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-19 23:52:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 19 23:52:15.981: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7710 /api/v1/namespaces/watch-7710/configmaps/e2e-watch-test-watch-closed 1b8bcfaa-3559-4170-bf7a-c7c20d39c3b1 6077227 0 2020-05-19 23:52:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] 
[] [] [{e2e.test Update v1 2020-05-19 23:52:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:52:15.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7710" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":56,"skipped":802,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:52:15.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7756 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7756 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7756 May 19 23:52:16.122: INFO: Found 0 stateful pods, waiting for 1 May 19 23:52:26.127: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 19 23:52:26.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7756 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 23:52:26.393: INFO: stderr: "I0519 23:52:26.263838 821 log.go:172] (0xc000b800b0) (0xc000752780) Create stream\nI0519 23:52:26.263900 821 log.go:172] (0xc000b800b0) (0xc000752780) Stream added, broadcasting: 1\nI0519 23:52:26.266973 821 log.go:172] (0xc000b800b0) Reply frame received for 1\nI0519 23:52:26.267019 821 log.go:172] (0xc000b800b0) (0xc000752fa0) Create stream\nI0519 23:52:26.267031 821 log.go:172] (0xc000b800b0) (0xc000752fa0) Stream added, broadcasting: 3\nI0519 23:52:26.268216 821 log.go:172] (0xc000b800b0) Reply frame received for 3\nI0519 23:52:26.268245 821 log.go:172] (0xc000b800b0) (0xc0001379a0) Create stream\nI0519 23:52:26.268260 821 log.go:172] (0xc000b800b0) (0xc0001379a0) Stream added, broadcasting: 5\nI0519 23:52:26.271746 821 log.go:172] (0xc000b800b0) Reply frame received for 5\nI0519 23:52:26.352483 821 log.go:172] (0xc000b800b0) Data frame received for 5\nI0519 23:52:26.352505 821 log.go:172] (0xc0001379a0) (5) Data frame handling\nI0519 23:52:26.352518 821 log.go:172] (0xc0001379a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 23:52:26.383846 821 log.go:172] (0xc000b800b0) Data frame received for 3\nI0519 23:52:26.383892 821 log.go:172] (0xc000752fa0) (3) Data frame handling\nI0519 23:52:26.383920 821 
log.go:172] (0xc000752fa0) (3) Data frame sent\nI0519 23:52:26.383942 821 log.go:172] (0xc000b800b0) Data frame received for 3\nI0519 23:52:26.383953 821 log.go:172] (0xc000752fa0) (3) Data frame handling\nI0519 23:52:26.384015 821 log.go:172] (0xc000b800b0) Data frame received for 5\nI0519 23:52:26.384048 821 log.go:172] (0xc0001379a0) (5) Data frame handling\nI0519 23:52:26.385976 821 log.go:172] (0xc000b800b0) Data frame received for 1\nI0519 23:52:26.386018 821 log.go:172] (0xc000752780) (1) Data frame handling\nI0519 23:52:26.386056 821 log.go:172] (0xc000752780) (1) Data frame sent\nI0519 23:52:26.386093 821 log.go:172] (0xc000b800b0) (0xc000752780) Stream removed, broadcasting: 1\nI0519 23:52:26.386134 821 log.go:172] (0xc000b800b0) Go away received\nI0519 23:52:26.386648 821 log.go:172] (0xc000b800b0) (0xc000752780) Stream removed, broadcasting: 1\nI0519 23:52:26.386684 821 log.go:172] (0xc000b800b0) (0xc000752fa0) Stream removed, broadcasting: 3\nI0519 23:52:26.386722 821 log.go:172] (0xc000b800b0) (0xc0001379a0) Stream removed, broadcasting: 5\n" May 19 23:52:26.393: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 23:52:26.393: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 23:52:26.397: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 19 23:52:36.402: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 23:52:36.402: INFO: Waiting for statefulset status.replicas updated to 0 May 19 23:52:36.432: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999538s May 19 23:52:37.438: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.98183076s May 19 23:52:38.462: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.975696254s May 19 23:52:39.466: INFO: Verifying statefulset ss doesn't scale 
past 1 for another 6.951515877s May 19 23:52:40.486: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.947497302s May 19 23:52:41.490: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.927577234s May 19 23:52:42.498: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.923264556s May 19 23:52:43.504: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.915475434s May 19 23:52:44.508: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.909728973s May 19 23:52:45.513: INFO: Verifying statefulset ss doesn't scale past 1 for another 904.697077ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7756 May 19 23:52:46.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7756 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 23:52:46.766: INFO: stderr: "I0519 23:52:46.656634 841 log.go:172] (0xc000b3c000) (0xc000139540) Create stream\nI0519 23:52:46.656708 841 log.go:172] (0xc000b3c000) (0xc000139540) Stream added, broadcasting: 1\nI0519 23:52:46.659349 841 log.go:172] (0xc000b3c000) Reply frame received for 1\nI0519 23:52:46.659394 841 log.go:172] (0xc000b3c000) (0xc00022e000) Create stream\nI0519 23:52:46.659405 841 log.go:172] (0xc000b3c000) (0xc00022e000) Stream added, broadcasting: 3\nI0519 23:52:46.660267 841 log.go:172] (0xc000b3c000) Reply frame received for 3\nI0519 23:52:46.660314 841 log.go:172] (0xc000b3c000) (0xc00049e320) Create stream\nI0519 23:52:46.660333 841 log.go:172] (0xc000b3c000) (0xc00049e320) Stream added, broadcasting: 5\nI0519 23:52:46.661287 841 log.go:172] (0xc000b3c000) Reply frame received for 5\nI0519 23:52:46.760869 841 log.go:172] (0xc000b3c000) Data frame received for 3\nI0519 23:52:46.760899 841 log.go:172] (0xc00022e000) (3) Data frame handling\nI0519 23:52:46.760923 841 
log.go:172] (0xc000b3c000) Data frame received for 5\nI0519 23:52:46.760949 841 log.go:172] (0xc00049e320) (5) Data frame handling\nI0519 23:52:46.760967 841 log.go:172] (0xc00049e320) (5) Data frame sent\nI0519 23:52:46.760993 841 log.go:172] (0xc000b3c000) Data frame received for 5\nI0519 23:52:46.761008 841 log.go:172] (0xc00049e320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 23:52:46.761050 841 log.go:172] (0xc00022e000) (3) Data frame sent\nI0519 23:52:46.761071 841 log.go:172] (0xc000b3c000) Data frame received for 3\nI0519 23:52:46.761084 841 log.go:172] (0xc00022e000) (3) Data frame handling\nI0519 23:52:46.762749 841 log.go:172] (0xc000b3c000) Data frame received for 1\nI0519 23:52:46.762761 841 log.go:172] (0xc000139540) (1) Data frame handling\nI0519 23:52:46.762768 841 log.go:172] (0xc000139540) (1) Data frame sent\nI0519 23:52:46.762775 841 log.go:172] (0xc000b3c000) (0xc000139540) Stream removed, broadcasting: 1\nI0519 23:52:46.762985 841 log.go:172] (0xc000b3c000) Go away received\nI0519 23:52:46.763049 841 log.go:172] (0xc000b3c000) (0xc000139540) Stream removed, broadcasting: 1\nI0519 23:52:46.763061 841 log.go:172] (0xc000b3c000) (0xc00022e000) Stream removed, broadcasting: 3\nI0519 23:52:46.763066 841 log.go:172] (0xc000b3c000) (0xc00049e320) Stream removed, broadcasting: 5\n" May 19 23:52:46.767: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 23:52:46.767: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 23:52:46.770: INFO: Found 1 stateful pods, waiting for 3 May 19 23:52:56.775: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 19 23:52:56.775: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 19 23:52:56.775: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running 
- Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 19 23:52:56.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7756 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 23:52:57.026: INFO: stderr: "I0519 23:52:56.919335 861 log.go:172] (0xc000b26e70) (0xc00031ac80) Create stream\nI0519 23:52:56.919406 861 log.go:172] (0xc000b26e70) (0xc00031ac80) Stream added, broadcasting: 1\nI0519 23:52:56.923198 861 log.go:172] (0xc000b26e70) Reply frame received for 1\nI0519 23:52:56.923263 861 log.go:172] (0xc000b26e70) (0xc0005ec5a0) Create stream\nI0519 23:52:56.923294 861 log.go:172] (0xc000b26e70) (0xc0005ec5a0) Stream added, broadcasting: 3\nI0519 23:52:56.924620 861 log.go:172] (0xc000b26e70) Reply frame received for 3\nI0519 23:52:56.924678 861 log.go:172] (0xc000b26e70) (0xc000263ea0) Create stream\nI0519 23:52:56.924732 861 log.go:172] (0xc000b26e70) (0xc000263ea0) Stream added, broadcasting: 5\nI0519 23:52:56.926284 861 log.go:172] (0xc000b26e70) Reply frame received for 5\nI0519 23:52:57.018532 861 log.go:172] (0xc000b26e70) Data frame received for 3\nI0519 23:52:57.018564 861 log.go:172] (0xc0005ec5a0) (3) Data frame handling\nI0519 23:52:57.018575 861 log.go:172] (0xc0005ec5a0) (3) Data frame sent\nI0519 23:52:57.018584 861 log.go:172] (0xc000b26e70) Data frame received for 3\nI0519 23:52:57.018591 861 log.go:172] (0xc0005ec5a0) (3) Data frame handling\nI0519 23:52:57.018638 861 log.go:172] (0xc000b26e70) Data frame received for 5\nI0519 23:52:57.018682 861 log.go:172] (0xc000263ea0) (5) Data frame handling\nI0519 23:52:57.018708 861 log.go:172] (0xc000263ea0) (5) Data frame sent\nI0519 23:52:57.018732 861 log.go:172] (0xc000b26e70) Data frame received for 5\nI0519 23:52:57.018742 861 log.go:172] (0xc000263ea0) (5) Data frame handling\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0519 23:52:57.020503 861 log.go:172] (0xc000b26e70) Data frame received for 1\nI0519 23:52:57.020542 861 log.go:172] (0xc00031ac80) (1) Data frame handling\nI0519 23:52:57.020565 861 log.go:172] (0xc00031ac80) (1) Data frame sent\nI0519 23:52:57.020592 861 log.go:172] (0xc000b26e70) (0xc00031ac80) Stream removed, broadcasting: 1\nI0519 23:52:57.020618 861 log.go:172] (0xc000b26e70) Go away received\nI0519 23:52:57.021067 861 log.go:172] (0xc000b26e70) (0xc00031ac80) Stream removed, broadcasting: 1\nI0519 23:52:57.021090 861 log.go:172] (0xc000b26e70) (0xc0005ec5a0) Stream removed, broadcasting: 3\nI0519 23:52:57.021101 861 log.go:172] (0xc000b26e70) (0xc000263ea0) Stream removed, broadcasting: 5\n" May 19 23:52:57.026: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 23:52:57.026: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 23:52:57.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7756 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 23:52:57.308: INFO: stderr: "I0519 23:52:57.168036 881 log.go:172] (0xc000a973f0) (0xc000b0a460) Create stream\nI0519 23:52:57.168097 881 log.go:172] (0xc000a973f0) (0xc000b0a460) Stream added, broadcasting: 1\nI0519 23:52:57.173076 881 log.go:172] (0xc000a973f0) Reply frame received for 1\nI0519 23:52:57.173281 881 log.go:172] (0xc000a973f0) (0xc000850640) Create stream\nI0519 23:52:57.173299 881 log.go:172] (0xc000a973f0) (0xc000850640) Stream added, broadcasting: 3\nI0519 23:52:57.174274 881 log.go:172] (0xc000a973f0) Reply frame received for 3\nI0519 23:52:57.174314 881 log.go:172] (0xc000a973f0) (0xc00050cdc0) Create stream\nI0519 23:52:57.174323 881 log.go:172] (0xc000a973f0) (0xc00050cdc0) Stream added, broadcasting: 
5\nI0519 23:52:57.175131 881 log.go:172] (0xc000a973f0) Reply frame received for 5\nI0519 23:52:57.244991 881 log.go:172] (0xc000a973f0) Data frame received for 5\nI0519 23:52:57.245015 881 log.go:172] (0xc00050cdc0) (5) Data frame handling\nI0519 23:52:57.245029 881 log.go:172] (0xc00050cdc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 23:52:57.300150 881 log.go:172] (0xc000a973f0) Data frame received for 3\nI0519 23:52:57.300172 881 log.go:172] (0xc000850640) (3) Data frame handling\nI0519 23:52:57.300184 881 log.go:172] (0xc000850640) (3) Data frame sent\nI0519 23:52:57.300189 881 log.go:172] (0xc000a973f0) Data frame received for 3\nI0519 23:52:57.300195 881 log.go:172] (0xc000850640) (3) Data frame handling\nI0519 23:52:57.300230 881 log.go:172] (0xc000a973f0) Data frame received for 5\nI0519 23:52:57.300238 881 log.go:172] (0xc00050cdc0) (5) Data frame handling\nI0519 23:52:57.302594 881 log.go:172] (0xc000a973f0) Data frame received for 1\nI0519 23:52:57.302614 881 log.go:172] (0xc000b0a460) (1) Data frame handling\nI0519 23:52:57.302622 881 log.go:172] (0xc000b0a460) (1) Data frame sent\nI0519 23:52:57.302631 881 log.go:172] (0xc000a973f0) (0xc000b0a460) Stream removed, broadcasting: 1\nI0519 23:52:57.302900 881 log.go:172] (0xc000a973f0) (0xc000b0a460) Stream removed, broadcasting: 1\nI0519 23:52:57.302914 881 log.go:172] (0xc000a973f0) (0xc000850640) Stream removed, broadcasting: 3\nI0519 23:52:57.302938 881 log.go:172] (0xc000a973f0) Go away received\nI0519 23:52:57.303010 881 log.go:172] (0xc000a973f0) (0xc00050cdc0) Stream removed, broadcasting: 5\n" May 19 23:52:57.309: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 23:52:57.309: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 23:52:57.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-7756 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 23:52:57.599: INFO: stderr: "I0519 23:52:57.488425 902 log.go:172] (0xc0009af810) (0xc0006caf00) Create stream\nI0519 23:52:57.488474 902 log.go:172] (0xc0009af810) (0xc0006caf00) Stream added, broadcasting: 1\nI0519 23:52:57.491762 902 log.go:172] (0xc0009af810) Reply frame received for 1\nI0519 23:52:57.491824 902 log.go:172] (0xc0009af810) (0xc0004b59a0) Create stream\nI0519 23:52:57.491855 902 log.go:172] (0xc0009af810) (0xc0004b59a0) Stream added, broadcasting: 3\nI0519 23:52:57.492956 902 log.go:172] (0xc0009af810) Reply frame received for 3\nI0519 23:52:57.493010 902 log.go:172] (0xc0009af810) (0xc000b580a0) Create stream\nI0519 23:52:57.493577 902 log.go:172] (0xc0009af810) (0xc000b580a0) Stream added, broadcasting: 5\nI0519 23:52:57.494617 902 log.go:172] (0xc0009af810) Reply frame received for 5\nI0519 23:52:57.546156 902 log.go:172] (0xc0009af810) Data frame received for 5\nI0519 23:52:57.546179 902 log.go:172] (0xc000b580a0) (5) Data frame handling\nI0519 23:52:57.546196 902 log.go:172] (0xc000b580a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 23:52:57.592628 902 log.go:172] (0xc0009af810) Data frame received for 3\nI0519 23:52:57.592675 902 log.go:172] (0xc0004b59a0) (3) Data frame handling\nI0519 23:52:57.592683 902 log.go:172] (0xc0004b59a0) (3) Data frame sent\nI0519 23:52:57.592688 902 log.go:172] (0xc0009af810) Data frame received for 3\nI0519 23:52:57.592692 902 log.go:172] (0xc0004b59a0) (3) Data frame handling\nI0519 23:52:57.592739 902 log.go:172] (0xc0009af810) Data frame received for 5\nI0519 23:52:57.592794 902 log.go:172] (0xc000b580a0) (5) Data frame handling\nI0519 23:52:57.594777 902 log.go:172] (0xc0009af810) Data frame received for 1\nI0519 23:52:57.594790 902 log.go:172] (0xc0006caf00) (1) Data frame handling\nI0519 23:52:57.594799 902 log.go:172] 
(0xc0006caf00) (1) Data frame sent\nI0519 23:52:57.594809 902 log.go:172] (0xc0009af810) (0xc0006caf00) Stream removed, broadcasting: 1\nI0519 23:52:57.594855 902 log.go:172] (0xc0009af810) Go away received\nI0519 23:52:57.595039 902 log.go:172] (0xc0009af810) (0xc0006caf00) Stream removed, broadcasting: 1\nI0519 23:52:57.595049 902 log.go:172] (0xc0009af810) (0xc0004b59a0) Stream removed, broadcasting: 3\nI0519 23:52:57.595054 902 log.go:172] (0xc0009af810) (0xc000b580a0) Stream removed, broadcasting: 5\n" May 19 23:52:57.599: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 23:52:57.599: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 23:52:57.599: INFO: Waiting for statefulset status.replicas updated to 0 May 19 23:52:57.603: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 19 23:53:07.610: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 23:53:07.610: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 19 23:53:07.610: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 19 23:53:07.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999752s May 19 23:53:08.628: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993723987s May 19 23:53:09.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988075968s May 19 23:53:10.638: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982485153s May 19 23:53:11.643: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978046158s May 19 23:53:12.649: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972631232s May 19 23:53:13.656: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.966730381s May 19 23:53:14.661: 
INFO: Verifying statefulset ss doesn't scale past 3 for another 2.960460611s May 19 23:53:15.666: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.954759634s May 19 23:53:16.671: INFO: Verifying statefulset ss doesn't scale past 3 for another 950.149883ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7756 May 19 23:53:17.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7756 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 23:53:17.889: INFO: stderr: "I0519 23:53:17.812098 920 log.go:172] (0xc00094d3f0) (0xc000b4e500) Create stream\nI0519 23:53:17.812142 920 log.go:172] (0xc00094d3f0) (0xc000b4e500) Stream added, broadcasting: 1\nI0519 23:53:17.816518 920 log.go:172] (0xc00094d3f0) Reply frame received for 1\nI0519 23:53:17.816567 920 log.go:172] (0xc00094d3f0) (0xc0003fcdc0) Create stream\nI0519 23:53:17.816581 920 log.go:172] (0xc00094d3f0) (0xc0003fcdc0) Stream added, broadcasting: 3\nI0519 23:53:17.817718 920 log.go:172] (0xc00094d3f0) Reply frame received for 3\nI0519 23:53:17.817743 920 log.go:172] (0xc00094d3f0) (0xc0004ba000) Create stream\nI0519 23:53:17.817750 920 log.go:172] (0xc00094d3f0) (0xc0004ba000) Stream added, broadcasting: 5\nI0519 23:53:17.818632 920 log.go:172] (0xc00094d3f0) Reply frame received for 5\nI0519 23:53:17.882775 920 log.go:172] (0xc00094d3f0) Data frame received for 3\nI0519 23:53:17.882824 920 log.go:172] (0xc0003fcdc0) (3) Data frame handling\nI0519 23:53:17.882841 920 log.go:172] (0xc0003fcdc0) (3) Data frame sent\nI0519 23:53:17.882849 920 log.go:172] (0xc00094d3f0) Data frame received for 3\nI0519 23:53:17.882856 920 log.go:172] (0xc0003fcdc0) (3) Data frame handling\nI0519 23:53:17.882888 920 log.go:172] (0xc00094d3f0) Data frame received for 5\nI0519 23:53:17.882914 920 log.go:172] (0xc0004ba000) (5) Data frame 
handling\nI0519 23:53:17.882938 920 log.go:172] (0xc0004ba000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 23:53:17.883027 920 log.go:172] (0xc00094d3f0) Data frame received for 5\nI0519 23:53:17.883053 920 log.go:172] (0xc0004ba000) (5) Data frame handling\nI0519 23:53:17.884327 920 log.go:172] (0xc00094d3f0) Data frame received for 1\nI0519 23:53:17.884346 920 log.go:172] (0xc000b4e500) (1) Data frame handling\nI0519 23:53:17.884358 920 log.go:172] (0xc000b4e500) (1) Data frame sent\nI0519 23:53:17.884370 920 log.go:172] (0xc00094d3f0) (0xc000b4e500) Stream removed, broadcasting: 1\nI0519 23:53:17.884380 920 log.go:172] (0xc00094d3f0) Go away received\nI0519 23:53:17.884758 920 log.go:172] (0xc00094d3f0) (0xc000b4e500) Stream removed, broadcasting: 1\nI0519 23:53:17.884773 920 log.go:172] (0xc00094d3f0) (0xc0003fcdc0) Stream removed, broadcasting: 3\nI0519 23:53:17.884780 920 log.go:172] (0xc00094d3f0) (0xc0004ba000) Stream removed, broadcasting: 5\n" May 19 23:53:17.889: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 23:53:17.889: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 23:53:17.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7756 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 23:53:18.097: INFO: stderr: "I0519 23:53:18.014355 942 log.go:172] (0xc000b1e4d0) (0xc00069ec80) Create stream\nI0519 23:53:18.014418 942 log.go:172] (0xc000b1e4d0) (0xc00069ec80) Stream added, broadcasting: 1\nI0519 23:53:18.016418 942 log.go:172] (0xc000b1e4d0) Reply frame received for 1\nI0519 23:53:18.016474 942 log.go:172] (0xc000b1e4d0) (0xc0005725a0) Create stream\nI0519 23:53:18.016486 942 log.go:172] (0xc000b1e4d0) (0xc0005725a0) Stream added, broadcasting: 3\nI0519 
23:53:18.017901 942 log.go:172] (0xc000b1e4d0) Reply frame received for 3\nI0519 23:53:18.017954 942 log.go:172] (0xc000b1e4d0) (0xc0004339a0) Create stream\nI0519 23:53:18.017976 942 log.go:172] (0xc000b1e4d0) (0xc0004339a0) Stream added, broadcasting: 5\nI0519 23:53:18.018877 942 log.go:172] (0xc000b1e4d0) Reply frame received for 5\nI0519 23:53:18.089546 942 log.go:172] (0xc000b1e4d0) Data frame received for 5\nI0519 23:53:18.089581 942 log.go:172] (0xc0004339a0) (5) Data frame handling\nI0519 23:53:18.089593 942 log.go:172] (0xc0004339a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 23:53:18.089621 942 log.go:172] (0xc000b1e4d0) Data frame received for 3\nI0519 23:53:18.089640 942 log.go:172] (0xc0005725a0) (3) Data frame handling\nI0519 23:53:18.089655 942 log.go:172] (0xc0005725a0) (3) Data frame sent\nI0519 23:53:18.089664 942 log.go:172] (0xc000b1e4d0) Data frame received for 3\nI0519 23:53:18.089675 942 log.go:172] (0xc0005725a0) (3) Data frame handling\nI0519 23:53:18.089701 942 log.go:172] (0xc000b1e4d0) Data frame received for 5\nI0519 23:53:18.089709 942 log.go:172] (0xc0004339a0) (5) Data frame handling\nI0519 23:53:18.091111 942 log.go:172] (0xc000b1e4d0) Data frame received for 1\nI0519 23:53:18.091335 942 log.go:172] (0xc00069ec80) (1) Data frame handling\nI0519 23:53:18.091399 942 log.go:172] (0xc00069ec80) (1) Data frame sent\nI0519 23:53:18.091448 942 log.go:172] (0xc000b1e4d0) (0xc00069ec80) Stream removed, broadcasting: 1\nI0519 23:53:18.091475 942 log.go:172] (0xc000b1e4d0) Go away received\nI0519 23:53:18.091839 942 log.go:172] (0xc000b1e4d0) (0xc00069ec80) Stream removed, broadcasting: 1\nI0519 23:53:18.091860 942 log.go:172] (0xc000b1e4d0) (0xc0005725a0) Stream removed, broadcasting: 3\nI0519 23:53:18.091870 942 log.go:172] (0xc000b1e4d0) (0xc0004339a0) Stream removed, broadcasting: 5\n" May 19 23:53:18.097: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 23:53:18.097: 
INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 23:53:18.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7756 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 23:53:18.320: INFO: stderr: "I0519 23:53:18.241930 962 log.go:172] (0xc000ae3760) (0xc00068fcc0) Create stream\nI0519 23:53:18.242008 962 log.go:172] (0xc000ae3760) (0xc00068fcc0) Stream added, broadcasting: 1\nI0519 23:53:18.244980 962 log.go:172] (0xc000ae3760) Reply frame received for 1\nI0519 23:53:18.245066 962 log.go:172] (0xc000ae3760) (0xc0006ae6e0) Create stream\nI0519 23:53:18.245266 962 log.go:172] (0xc000ae3760) (0xc0006ae6e0) Stream added, broadcasting: 3\nI0519 23:53:18.246743 962 log.go:172] (0xc000ae3760) Reply frame received for 3\nI0519 23:53:18.246791 962 log.go:172] (0xc000ae3760) (0xc0006af040) Create stream\nI0519 23:53:18.246848 962 log.go:172] (0xc000ae3760) (0xc0006af040) Stream added, broadcasting: 5\nI0519 23:53:18.248376 962 log.go:172] (0xc000ae3760) Reply frame received for 5\nI0519 23:53:18.314218 962 log.go:172] (0xc000ae3760) Data frame received for 3\nI0519 23:53:18.314246 962 log.go:172] (0xc0006ae6e0) (3) Data frame handling\nI0519 23:53:18.314258 962 log.go:172] (0xc0006ae6e0) (3) Data frame sent\nI0519 23:53:18.314267 962 log.go:172] (0xc000ae3760) Data frame received for 3\nI0519 23:53:18.314283 962 log.go:172] (0xc000ae3760) Data frame received for 5\nI0519 23:53:18.314312 962 log.go:172] (0xc0006af040) (5) Data frame handling\nI0519 23:53:18.314328 962 log.go:172] (0xc0006af040) (5) Data frame sent\nI0519 23:53:18.314339 962 log.go:172] (0xc000ae3760) Data frame received for 5\nI0519 23:53:18.314350 962 log.go:172] (0xc0006af040) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 23:53:18.314379 962 log.go:172] 
(0xc0006ae6e0) (3) Data frame handling\nI0519 23:53:18.315364 962 log.go:172] (0xc000ae3760) Data frame received for 1\nI0519 23:53:18.315415 962 log.go:172] (0xc00068fcc0) (1) Data frame handling\nI0519 23:53:18.315442 962 log.go:172] (0xc00068fcc0) (1) Data frame sent\nI0519 23:53:18.315455 962 log.go:172] (0xc000ae3760) (0xc00068fcc0) Stream removed, broadcasting: 1\nI0519 23:53:18.315478 962 log.go:172] (0xc000ae3760) Go away received\nI0519 23:53:18.315781 962 log.go:172] (0xc000ae3760) (0xc00068fcc0) Stream removed, broadcasting: 1\nI0519 23:53:18.315795 962 log.go:172] (0xc000ae3760) (0xc0006ae6e0) Stream removed, broadcasting: 3\nI0519 23:53:18.315802 962 log.go:172] (0xc000ae3760) (0xc0006af040) Stream removed, broadcasting: 5\n" May 19 23:53:18.320: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 23:53:18.320: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 23:53:18.320: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 19 23:53:38.340: INFO: Deleting all statefulset in ns statefulset-7756 May 19 23:53:38.352: INFO: Scaling statefulset ss to 0 May 19 23:53:38.405: INFO: Waiting for statefulset status.replicas updated to 0 May 19 23:53:38.408: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:53:38.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7756" for this suite. 
• [SLOW TEST:82.442 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":57,"skipped":807,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:53:38.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating a pod May 19 23:53:38.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-1666 -- logs-generator --log-lines-total 100 --run-duration 20s' May 19 23:53:38.596: INFO: stderr: "" 
May 19 23:53:38.596: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 19 23:53:38.596: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 19 23:53:38.596: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1666" to be "running and ready, or succeeded" May 19 23:53:38.666: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 70.129005ms May 19 23:53:40.669: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073503737s May 19 23:53:42.673: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.07752207s May 19 23:53:42.673: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 19 23:53:42.673: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings May 19 23:53:42.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1666' May 19 23:53:42.810: INFO: stderr: "" May 19 23:53:42.810: INFO: stdout: "I0519 23:53:41.289955 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/n72 465\nI0519 23:53:41.476880 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/j26 494\nI0519 23:53:41.676923 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/qgn 524\nI0519 23:53:41.876908 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/w4h5 400\nI0519 23:53:42.076957 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/bsjl 345\nI0519 23:53:42.276929 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/fwb 366\nI0519 23:53:42.476893 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/zq8f 482\nI0519 23:53:42.676902 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/qh47 588\n" STEP: limiting log lines May 19 23:53:42.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1666 --tail=1' May 19 23:53:42.918: INFO: stderr: "" May 19 23:53:42.918: INFO: stdout: "I0519 23:53:42.876937 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/p8q 328\n" May 19 23:53:42.918: INFO: got output "I0519 23:53:42.876937 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/p8q 328\n" STEP: limiting log bytes May 19 23:53:42.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1666 --limit-bytes=1' May 19 23:53:43.040: INFO: stderr: "" May 19 23:53:43.040: INFO: stdout: "I" May 19 23:53:43.040: INFO: got output "I" STEP: exposing timestamps May 19 23:53:43.040: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1666 --tail=1 --timestamps' May 19 23:53:43.167: INFO: stderr: "" May 19 23:53:43.167: INFO: stdout: "2020-05-19T23:53:43.077076069Z I0519 23:53:43.076890 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/rmjq 335\n" May 19 23:53:43.167: INFO: got output "2020-05-19T23:53:43.077076069Z I0519 23:53:43.076890 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/rmjq 335\n" STEP: restricting to a time range May 19 23:53:45.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1666 --since=1s' May 19 23:53:45.783: INFO: stderr: "" May 19 23:53:45.783: INFO: stdout: "I0519 23:53:44.876897 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/8qd 408\nI0519 23:53:45.076842 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/r6r 529\nI0519 23:53:45.276887 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/js7 565\nI0519 23:53:45.476890 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/n7c8 421\nI0519 23:53:45.676887 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/fv6 419\n" May 19 23:53:45.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1666 --since=24h' May 19 23:53:45.892: INFO: stderr: "" May 19 23:53:45.892: INFO: stdout: "I0519 23:53:41.289955 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/n72 465\nI0519 23:53:41.476880 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/j26 494\nI0519 23:53:41.676923 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/qgn 524\nI0519 23:53:41.876908 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/w4h5 400\nI0519 23:53:42.076957 1 
logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/bsjl 345\nI0519 23:53:42.276929 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/fwb 366\nI0519 23:53:42.476893 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/zq8f 482\nI0519 23:53:42.676902 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/qh47 588\nI0519 23:53:42.876937 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/p8q 328\nI0519 23:53:43.076890 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/rmjq 335\nI0519 23:53:43.276902 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/tsp 552\nI0519 23:53:43.476941 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/c2z 311\nI0519 23:53:43.676892 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/889l 336\nI0519 23:53:43.876908 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/7v7p 297\nI0519 23:53:44.076899 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/tqj 432\nI0519 23:53:44.276939 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/r9d 269\nI0519 23:53:44.476918 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/m8j 249\nI0519 23:53:44.676884 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/4bcs 474\nI0519 23:53:44.876897 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/8qd 408\nI0519 23:53:45.076842 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/r6r 529\nI0519 23:53:45.276887 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/js7 565\nI0519 23:53:45.476890 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/n7c8 421\nI0519 23:53:45.676887 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/fv6 419\nI0519 23:53:45.876881 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/t6p 348\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 19 23:53:45.892: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1666' May 19 23:53:55.229: INFO: stderr: "" May 19 23:53:55.229: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:53:55.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1666" for this suite. • [SLOW TEST:16.831 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":58,"skipped":813,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:53:55.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-cdd93c9d-23b5-40f6-8931-5aadf18dbe5b STEP: Creating a pod to test consume secrets May 19 23:53:55.349: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-21d23f18-675e-425b-bec3-9d0d1c2753f8" in namespace "projected-805" to be "Succeeded or Failed" May 19 23:53:55.396: INFO: Pod "pod-projected-secrets-21d23f18-675e-425b-bec3-9d0d1c2753f8": Phase="Pending", Reason="", readiness=false. Elapsed: 47.129857ms May 19 23:53:57.400: INFO: Pod "pod-projected-secrets-21d23f18-675e-425b-bec3-9d0d1c2753f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050449878s May 19 23:53:59.405: INFO: Pod "pod-projected-secrets-21d23f18-675e-425b-bec3-9d0d1c2753f8": Phase="Running", Reason="", readiness=true. Elapsed: 4.055232155s May 19 23:54:01.409: INFO: Pod "pod-projected-secrets-21d23f18-675e-425b-bec3-9d0d1c2753f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05979629s STEP: Saw pod success May 19 23:54:01.409: INFO: Pod "pod-projected-secrets-21d23f18-675e-425b-bec3-9d0d1c2753f8" satisfied condition "Succeeded or Failed" May 19 23:54:01.413: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-21d23f18-675e-425b-bec3-9d0d1c2753f8 container projected-secret-volume-test: STEP: delete the pod May 19 23:54:01.452: INFO: Waiting for pod pod-projected-secrets-21d23f18-675e-425b-bec3-9d0d1c2753f8 to disappear May 19 23:54:01.466: INFO: Pod pod-projected-secrets-21d23f18-675e-425b-bec3-9d0d1c2753f8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:54:01.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-805" for this suite. 
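Aside: the `--tail` and `--limit-bytes` flags exercised in the Kubectl logs spec earlier select a suffix of lines and a prefix of bytes, respectively (hence the `got output "I"` step, where one byte of an `I0519 …` log line survives). A minimal Python sketch of that selection logic — illustrative only, not kubelet or kubectl code:

```python
def filter_logs(raw, tail=None, limit_bytes=None):
    """Approximate kubectl logs --tail / --limit-bytes on a captured log string."""
    if tail is not None:
        # keep only the last `tail` complete lines
        lines = raw.splitlines(keepends=True)
        raw = "".join(lines[-tail:])
    if limit_bytes is not None:
        # truncate to the first `limit_bytes` bytes, mid-line if necessary
        raw = raw.encode()[:limit_bytes].decode(errors="ignore")
    return raw

log = "line 0\nline 1\nline 2\n"
print(filter_logs(log, tail=1))         # keeps only "line 2\n"
print(filter_logs(log, limit_bytes=1))  # first byte only, like the --limit-bytes=1 step
```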
• [SLOW TEST:6.210 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":59,"skipped":816,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:54:01.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-eb3bcfdd-d2e2-4859-8e44-a12ea481becc STEP: Creating configMap with name cm-test-opt-upd-e694979f-b416-48e5-ad90-efe88c65885c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-eb3bcfdd-d2e2-4859-8e44-a12ea481becc STEP: Updating configmap cm-test-opt-upd-e694979f-b416-48e5-ad90-efe88c65885c STEP: Creating configMap with name cm-test-opt-create-7f4414d2-e277-40fc-b3a6-fb6afb46fa02 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:54:09.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6418" for this suite. • [SLOW TEST:8.249 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":60,"skipped":820,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:54:09.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-c001e2dc-7e18-4078-a8e4-88441587b7ba STEP: Creating a pod to test consume configMaps May 19 23:54:09.804: INFO: Waiting up to 5m0s for pod "pod-configmaps-306b2ef0-cebd-46fd-988c-5beed59370bc" in namespace "configmap-7352" to be "Succeeded or Failed" May 19 23:54:09.840: INFO: Pod 
"pod-configmaps-306b2ef0-cebd-46fd-988c-5beed59370bc": Phase="Pending", Reason="", readiness=false. Elapsed: 35.628727ms May 19 23:54:11.844: INFO: Pod "pod-configmaps-306b2ef0-cebd-46fd-988c-5beed59370bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039883231s May 19 23:54:13.848: INFO: Pod "pod-configmaps-306b2ef0-cebd-46fd-988c-5beed59370bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043981858s STEP: Saw pod success May 19 23:54:13.848: INFO: Pod "pod-configmaps-306b2ef0-cebd-46fd-988c-5beed59370bc" satisfied condition "Succeeded or Failed" May 19 23:54:13.850: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-306b2ef0-cebd-46fd-988c-5beed59370bc container configmap-volume-test: STEP: delete the pod May 19 23:54:13.979: INFO: Waiting for pod pod-configmaps-306b2ef0-cebd-46fd-988c-5beed59370bc to disappear May 19 23:54:14.025: INFO: Pod pod-configmaps-306b2ef0-cebd-46fd-988c-5beed59370bc no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:54:14.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7352" for this suite. 
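Aside: the ConfigMap spec above mounts a single ConfigMap at two paths in one pod, and the invariant it checks is simply that both mounts materialize the same keys and values. A rough sketch of that materialization, using hypothetical helper names and a temp directory in place of a kubelet volume mount:

```python
import os
import tempfile

def materialize(data, mount_path):
    """Write each ConfigMap key as a file under mount_path, as a volume mount would."""
    os.makedirs(mount_path, exist_ok=True)
    for key, value in data.items():
        with open(os.path.join(mount_path, key), "w") as f:
            f.write(value)

def read_back(mount_path):
    """Read every file under mount_path back into a dict."""
    out = {}
    for name in os.listdir(mount_path):
        with open(os.path.join(mount_path, name)) as f:
            out[name] = f.read()
    return out

cm = {"data-1": "value-1"}  # hypothetical ConfigMap payload
root = tempfile.mkdtemp()
materialize(cm, os.path.join(root, "volume-a"))
materialize(cm, os.path.join(root, "volume-b"))  # second mount of the same source
same = read_back(os.path.join(root, "volume-a")) == read_back(os.path.join(root, "volume-b"))
```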
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":61,"skipped":863,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:54:14.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 23:54:14.178: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"04b5f356-3a9f-4b61-bc65-8b5d20020908", Controller:(*bool)(0xc004a6d732), BlockOwnerDeletion:(*bool)(0xc004a6d733)}} May 19 23:54:14.206: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"149c1d6f-267b-476c-b7cd-69f9d65e9059", Controller:(*bool)(0xc00517499a), BlockOwnerDeletion:(*bool)(0xc00517499b)}} May 19 23:54:14.260: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"34f98aa3-411f-48a6-944f-b6f272487acd", Controller:(*bool)(0xc004a6d9aa), BlockOwnerDeletion:(*bool)(0xc004a6d9ab)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:54:19.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "gc-3282" for this suite. • [SLOW TEST:5.273 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":62,"skipped":896,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:54:19.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 19 23:54:27.452: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 23:54:27.485: INFO: Pod pod-with-poststart-http-hook still exists May 19 23:54:29.485: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 23:54:29.490: INFO: Pod pod-with-poststart-http-hook still exists May 19 23:54:31.485: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 23:54:31.490: INFO: Pod pod-with-poststart-http-hook still exists May 19 23:54:33.485: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 23:54:33.488: INFO: Pod pod-with-poststart-http-hook still exists May 19 23:54:35.485: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 23:54:35.489: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:54:35.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-955" for this suite. 
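Aside: the lifecycle-hook spec above stands up a helper container to receive the postStart `httpGet` request; the hook is considered successful when that GET returns a 2xx status (a failed postStart hook causes the container to be killed). A self-contained sketch of that handshake using Python's stdlib HTTP server in place of the handler container:

```python
import http.server
import threading
import urllib.request

class HookHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # stand-in for the "container to handle the HTTPGet hook request" in the spec
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), HookHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def http_get_hook(host, port, path="/"):
    """A postStart httpGet hook succeeds iff the GET returns a 2xx status."""
    with urllib.request.urlopen(f"http://{host}:{port}{path}") as resp:
        return 200 <= resp.status < 300

hook_ok = http_get_hook("127.0.0.1", server.server_port)
server.shutdown()
```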
• [SLOW TEST:16.175 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":63,"skipped":946,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:54:35.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-7177a431-9310-4970-b20f-1c40252a58d0 STEP: Creating a pod to test consume secrets May 19 23:54:35.859: INFO: Waiting up to 5m0s for pod "pod-secrets-f33b946b-831e-40be-8a81-f09b0bbb24db" in namespace "secrets-8982" to be "Succeeded or Failed" May 19 23:54:35.872: INFO: Pod "pod-secrets-f33b946b-831e-40be-8a81-f09b0bbb24db": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.241732ms May 19 23:54:37.874: INFO: Pod "pod-secrets-f33b946b-831e-40be-8a81-f09b0bbb24db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015749225s May 19 23:54:39.878: INFO: Pod "pod-secrets-f33b946b-831e-40be-8a81-f09b0bbb24db": Phase="Running", Reason="", readiness=true. Elapsed: 4.019079809s May 19 23:54:41.882: INFO: Pod "pod-secrets-f33b946b-831e-40be-8a81-f09b0bbb24db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023001393s STEP: Saw pod success May 19 23:54:41.882: INFO: Pod "pod-secrets-f33b946b-831e-40be-8a81-f09b0bbb24db" satisfied condition "Succeeded or Failed" May 19 23:54:41.885: INFO: Trying to get logs from node latest-worker pod pod-secrets-f33b946b-831e-40be-8a81-f09b0bbb24db container secret-volume-test: STEP: delete the pod May 19 23:54:41.915: INFO: Waiting for pod pod-secrets-f33b946b-831e-40be-8a81-f09b0bbb24db to disappear May 19 23:54:41.947: INFO: Pod pod-secrets-f33b946b-831e-40be-8a81-f09b0bbb24db no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:54:41.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8982" for this suite. 
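Aside: the Secret consumed above is carried base64-encoded under `.data` in the API object, while the container reads the decoded plaintext from the mounted file. A small sketch of that round trip (key and value names are hypothetical):

```python
import base64

def encode_secret(data):
    """Secret manifests carry values base64-encoded under .data."""
    return {k: base64.b64encode(v.encode()).decode() for k, v in data.items()}

def mounted_value(encoded, key):
    """What a container reads from the mounted secret file: the decoded plaintext."""
    return base64.b64decode(encoded[key]).decode()

manifest_data = encode_secret({"data-1": "value-1"})
print(manifest_data["data-1"])              # base64 form stored in the API object
print(mounted_value(manifest_data, "data-1"))  # value-1
```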
• [SLOW TEST:6.459 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":64,"skipped":952,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:54:41.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 19 23:54:42.017: INFO: Waiting up to 5m0s for pod "client-containers-a444b202-7b24-4c38-ba80-bfe45f5c2771" in namespace "containers-3492" to be "Succeeded or Failed" May 19 23:54:42.020: INFO: Pod "client-containers-a444b202-7b24-4c38-ba80-bfe45f5c2771": Phase="Pending", Reason="", readiness=false. Elapsed: 2.407386ms May 19 23:54:44.024: INFO: Pod "client-containers-a444b202-7b24-4c38-ba80-bfe45f5c2771": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006869147s May 19 23:54:46.027: INFO: Pod "client-containers-a444b202-7b24-4c38-ba80-bfe45f5c2771": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009679097s STEP: Saw pod success May 19 23:54:46.027: INFO: Pod "client-containers-a444b202-7b24-4c38-ba80-bfe45f5c2771" satisfied condition "Succeeded or Failed" May 19 23:54:46.029: INFO: Trying to get logs from node latest-worker pod client-containers-a444b202-7b24-4c38-ba80-bfe45f5c2771 container test-container: STEP: delete the pod May 19 23:54:46.060: INFO: Waiting for pod client-containers-a444b202-7b24-4c38-ba80-bfe45f5c2771 to disappear May 19 23:54:46.068: INFO: Pod client-containers-a444b202-7b24-4c38-ba80-bfe45f5c2771 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:54:46.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3492" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":65,"skipped":983,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:54:46.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed 
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 19 23:54:46.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version'
May 19 23:54:46.585: INFO: stderr: ""
May 19 23:54:46.585: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 23:54:46.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-588" for this suite.
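Aside: the `kubectl version` stdout captured above is Go struct syntax, so "checking all data is printed" amounts to finding both `GitVersion` fields. A sketch of pulling them out with a regex (the abridged string below is taken from the log output above):

```python
import re

# client half of the stdout captured by the spec above (abridged)
stdout = ('Client Version: version.Info{Major:"1", Minor:"19+", '
          'GitVersion:"v1.19.0-alpha.3.35+3416442e4b7eeb", ...}')

def git_versions(version_output):
    """Pull every GitVersion field out of `kubectl version` output."""
    return re.findall(r'GitVersion:"([^"]+)"', version_output)

print(git_versions(stdout))  # ['v1.19.0-alpha.3.35+3416442e4b7eeb']
```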
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":66,"skipped":1017,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:54:46.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 19 23:54:47.504: INFO: Pod name wrapped-volume-race-c2d46ef6-36a2-46db-a526-ee80adc9c0d1: Found 0 pods out of 5 May 19 23:54:52.511: INFO: Pod name wrapped-volume-race-c2d46ef6-36a2-46db-a526-ee80adc9c0d1: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c2d46ef6-36a2-46db-a526-ee80adc9c0d1 in namespace emptydir-wrapper-9270, will wait for the garbage collector to delete the pods May 19 23:55:06.611: INFO: Deleting ReplicationController wrapped-volume-race-c2d46ef6-36a2-46db-a526-ee80adc9c0d1 took: 8.017747ms May 19 23:55:06.911: INFO: Terminating ReplicationController wrapped-volume-race-c2d46ef6-36a2-46db-a526-ee80adc9c0d1 pods took: 300.236859ms STEP: Creating RC which spawns configmap-volume pods May 19 23:55:24.981: INFO: Pod name wrapped-volume-race-1db524b1-89b6-4a70-a4a9-db77979feed7: Found 0 pods out of 5 May 19 
23:55:29.992: INFO: Pod name wrapped-volume-race-1db524b1-89b6-4a70-a4a9-db77979feed7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1db524b1-89b6-4a70-a4a9-db77979feed7 in namespace emptydir-wrapper-9270, will wait for the garbage collector to delete the pods May 19 23:55:46.122: INFO: Deleting ReplicationController wrapped-volume-race-1db524b1-89b6-4a70-a4a9-db77979feed7 took: 7.654461ms May 19 23:55:46.522: INFO: Terminating ReplicationController wrapped-volume-race-1db524b1-89b6-4a70-a4a9-db77979feed7 pods took: 400.2818ms STEP: Creating RC which spawns configmap-volume pods May 19 23:55:55.606: INFO: Pod name wrapped-volume-race-d05211e6-0d1a-46ce-9584-269ec9d470d6: Found 0 pods out of 5 May 19 23:56:00.623: INFO: Pod name wrapped-volume-race-d05211e6-0d1a-46ce-9584-269ec9d470d6: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d05211e6-0d1a-46ce-9584-269ec9d470d6 in namespace emptydir-wrapper-9270, will wait for the garbage collector to delete the pods May 19 23:56:14.750: INFO: Deleting ReplicationController wrapped-volume-race-d05211e6-0d1a-46ce-9584-269ec9d470d6 took: 50.082845ms May 19 23:56:15.050: INFO: Terminating ReplicationController wrapped-volume-race-d05211e6-0d1a-46ce-9584-269ec9d470d6 pods took: 300.273378ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:56:25.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9270" for this suite. 
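Aside: the repeated `Found N pods out of 5` lines above come from polling the pod list until the ReplicationController reaches its replica count. A sketch of that wait loop, with a fake lister standing in for the API server (names are ours, not the e2e framework's):

```python
def wait_for_replicas(list_pods, want, attempts=10):
    """Poll a pod lister until `want` pods exist, as in 'Found N pods out of 5'."""
    found = 0
    for _ in range(attempts):
        found = len(list_pods())
        print(f"Found {found} pods out of {want}")
        if found >= want:
            return found
    raise TimeoutError(f"only {found} of {want} pods appeared")

# fake lister that "schedules" one more pod per poll
state = []
def fake_lister():
    state.append(object())
    return state

wait_for_replicas(fake_lister, want=3)  # returns 3 after three polls
```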
• [SLOW TEST:99.258 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":67,"skipped":1048,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:56:25.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 23:56:26.488: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 23:56:28.501: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529386, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529386, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529386, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529386, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 23:56:31.530: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 23:56:31.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-966-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:56:32.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2291" for this suite. STEP: Destroying namespace "webhook-2291-markers" for this suite. 
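Aside: the webhook registered above mutates the custom resource by returning an `admission.k8s.io/v1` AdmissionReview response whose `patch` field is a base64-encoded JSONPatch. A sketch of that response shape (the patch operation itself is a hypothetical example, not the one used by this spec):

```python
import base64
import json

def mutate_response(uid, patch_ops):
    """Shape of an AdmissionReview response that mutates the incoming object."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,                 # must echo the request's uid
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch_ops).encode()).decode(),
        },
    }

# hypothetical mutation: inject a field into the custom resource
resp = mutate_response("example-uid", [{"op": "add", "path": "/data/mutated", "value": "true"}])
decoded = json.loads(base64.b64decode(resp["response"]["patch"]))
```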
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.098 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":68,"skipped":1059,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:56:32.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 23:56:33.132: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bce402f0-ce32-4ac5-be6f-d6077c8b860d" in namespace "projected-6160" to be "Succeeded or Failed" May 19 23:56:33.159: INFO: Pod 
"downwardapi-volume-bce402f0-ce32-4ac5-be6f-d6077c8b860d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.440681ms May 19 23:56:35.368: INFO: Pod "downwardapi-volume-bce402f0-ce32-4ac5-be6f-d6077c8b860d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236392336s May 19 23:56:37.372: INFO: Pod "downwardapi-volume-bce402f0-ce32-4ac5-be6f-d6077c8b860d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.240151964s STEP: Saw pod success May 19 23:56:37.372: INFO: Pod "downwardapi-volume-bce402f0-ce32-4ac5-be6f-d6077c8b860d" satisfied condition "Succeeded or Failed" May 19 23:56:37.375: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-bce402f0-ce32-4ac5-be6f-d6077c8b860d container client-container: STEP: delete the pod May 19 23:56:37.436: INFO: Waiting for pod downwardapi-volume-bce402f0-ce32-4ac5-be6f-d6077c8b860d to disappear May 19 23:56:37.447: INFO: Pod downwardapi-volume-bce402f0-ce32-4ac5-be6f-d6077c8b860d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:56:37.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6160" for this suite. 
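The repeated `Waiting up to 5m0s for pod … to be "Succeeded or Failed"` / `Phase="Pending" … Elapsed: …` entries above come from a poll loop that re-reads the pod phase until it is terminal, logging the elapsed time each round. A simplified analogue (the real framework queries the API server; here the phase sequence is faked):

```python
import time

def wait_for_terminal_phase(get_phase, timeout_s=300.0, interval_s=0.01):
    # Poll until the pod reaches a terminal phase, logging elapsed time the
    # way the e2e framework does ('Phase=..., Elapsed: ...').
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        phase = get_phase()
        print(f'Pod: Phase="{phase}", Elapsed: {time.monotonic() - start:.6f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)
    raise TimeoutError("pod never reached Succeeded or Failed")

phases = iter(["Pending", "Pending", "Succeeded"])  # faked phase sequence
result = wait_for_terminal_phase(lambda: next(phases))
```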
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":69,"skipped":1073,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:56:37.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 19 23:56:42.153: INFO: Successfully updated pod "annotationupdatef75113ab-d5e2-4e51-ab87-a6c728538b96" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:56:44.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3422" for this suite. 
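For context on the annotation-update spec above: the downward API volume serializes pod annotations into a file, and the kubelet rewrites that file when the pod's metadata changes. A sketch of the serialization, assuming the simple `key="value"` line format and made-up annotation keys (the real kubelet also escapes values):

```python
def format_annotations(annotations):
    # One key="value" line per annotation, sorted for a stable file body.
    # Simplified: no escaping of embedded quotes or newlines.
    return "\n".join(f'{k}="{v}"' for k, v in sorted(annotations.items()))

# Hypothetical before/after states of the pod's annotations.
before = format_annotations({"builder": "bar"})
after = format_annotations({"builder": "bar", "mutated": "yes"})
```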
• [SLOW TEST:6.705 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":70,"skipped":1094,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:56:44.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 23:56:44.314: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab530016-799f-48fb-99bd-21f5bc26a64d" in namespace "downward-api-902" to be "Succeeded or Failed" May 19 23:56:44.317: INFO: Pod "downwardapi-volume-ab530016-799f-48fb-99bd-21f5bc26a64d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.89342ms May 19 23:56:46.422: INFO: Pod "downwardapi-volume-ab530016-799f-48fb-99bd-21f5bc26a64d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108009918s May 19 23:56:48.458: INFO: Pod "downwardapi-volume-ab530016-799f-48fb-99bd-21f5bc26a64d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.143645328s STEP: Saw pod success May 19 23:56:48.458: INFO: Pod "downwardapi-volume-ab530016-799f-48fb-99bd-21f5bc26a64d" satisfied condition "Succeeded or Failed" May 19 23:56:48.461: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ab530016-799f-48fb-99bd-21f5bc26a64d container client-container: STEP: delete the pod May 19 23:56:48.657: INFO: Waiting for pod downwardapi-volume-ab530016-799f-48fb-99bd-21f5bc26a64d to disappear May 19 23:56:48.750: INFO: Pod downwardapi-volume-ab530016-799f-48fb-99bd-21f5bc26a64d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:56:48.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-902" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":71,"skipped":1103,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:56:48.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 19 23:56:48.906: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 19 23:56:48.910: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 19 23:56:48.910: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 19 23:56:48.915: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} 
BinarySI}] May 19 23:56:48.915: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 19 23:56:48.993: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 19 23:56:48.994: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 19 23:56:56.457: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:56:56.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-7590" for this suite. 
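The expected/actual maps in the LimitRange entries above follow from per-resource defaulting: any request or limit a container omits is filled from the LimitRange's `defaultRequest` and `default` values. A simplified merge using the quantities from this run; the partial pod's spec is an assumed shape consistent with the merged values logged, and the real defaulting also derives a missing request from an explicit limit:

```python
def apply_limitrange(resources, default_request, default_limit):
    # Per-resource merge: values the container sets win; everything else
    # comes from the LimitRange defaults. Simplified vs. real Kubernetes.
    requests = {**default_request, **resources.get("requests", {})}
    limits = {**default_limit, **resources.get("limits", {})}
    return {"requests": requests, "limits": limits}

default_request = {"cpu": "100m", "memory": "200Mi", "ephemeral-storage": "200Gi"}
default_limit = {"cpu": "500m", "memory": "500Mi", "ephemeral-storage": "500Gi"}

# Pod with no resource requirements: receives the defaults verbatim.
empty = apply_limitrange({}, default_request, default_limit)

# Pod with partial requirements (assumed spec): its own values override.
partial = apply_limitrange(
    {"requests": {"cpu": "300m", "memory": "150Mi", "ephemeral-storage": "150Gi"},
     "limits": {"cpu": "300m"}},
    default_request, default_limit)
```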
• [SLOW TEST:7.695 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":72,"skipped":1125,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:56:56.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-30b3da21-ff86-4354-9776-8eadec3624d3 STEP: Creating a pod to test consume configMaps May 19 23:56:56.671: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9fe82bc3-1ea5-4e03-825d-3c7f72eaa6ca" in namespace "projected-2245" to be "Succeeded or Failed" May 19 23:56:56.687: INFO: Pod "pod-projected-configmaps-9fe82bc3-1ea5-4e03-825d-3c7f72eaa6ca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.967466ms May 19 23:56:58.883: INFO: Pod "pod-projected-configmaps-9fe82bc3-1ea5-4e03-825d-3c7f72eaa6ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211691323s May 19 23:57:00.887: INFO: Pod "pod-projected-configmaps-9fe82bc3-1ea5-4e03-825d-3c7f72eaa6ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216107801s May 19 23:57:02.959: INFO: Pod "pod-projected-configmaps-9fe82bc3-1ea5-4e03-825d-3c7f72eaa6ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.287639254s STEP: Saw pod success May 19 23:57:02.959: INFO: Pod "pod-projected-configmaps-9fe82bc3-1ea5-4e03-825d-3c7f72eaa6ca" satisfied condition "Succeeded or Failed" May 19 23:57:02.962: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-9fe82bc3-1ea5-4e03-825d-3c7f72eaa6ca container projected-configmap-volume-test: STEP: delete the pod May 19 23:57:03.093: INFO: Waiting for pod pod-projected-configmaps-9fe82bc3-1ea5-4e03-825d-3c7f72eaa6ca to disappear May 19 23:57:03.101: INFO: Pod pod-projected-configmaps-9fe82bc3-1ea5-4e03-825d-3c7f72eaa6ca no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:57:03.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2245" for this suite. 
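The resource maps printed earlier in this log show quantities in Kubernetes canonical form, e.g. `{{100 -3} {} 100m DecimalSI}` for 0.1 CPU and `{{536870912000 0} {} 500Gi BinarySI}` for 500Gi. A rough converter covering only the suffixes that appear in this log (real Kubernetes quantities support many more):

```python
def parse_quantity(s):
    # Handles only the suffixes seen in this log: m (milli, decimal) and
    # Ki/Mi/Gi (binary). Real Kubernetes quantities are far richer.
    binary = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30}
    for suffix, factor in binary.items():
        if s.endswith(suffix):
            return int(s[:-2]) * factor
    if s.endswith("m"):
        return int(s[:-1]) / 1000  # e.g. "100m" CPU -> 0.1 cores
    return int(s)
```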
• [SLOW TEST:6.711 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":73,"skipped":1140,"failed":0} SS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:57:03.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:57:03.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-4482" for this suite. 
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":74,"skipped":1142,"failed":0} SS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:57:03.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 23:57:08.294: INFO: Waiting up to 5m0s for pod "client-envvars-fd82ad6e-5142-40b8-803c-22b472024e0c" in namespace "pods-2629" to be "Succeeded or Failed" May 19 23:57:08.308: INFO: Pod "client-envvars-fd82ad6e-5142-40b8-803c-22b472024e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.23282ms May 19 23:57:10.312: INFO: Pod "client-envvars-fd82ad6e-5142-40b8-803c-22b472024e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017867861s May 19 23:57:12.316: INFO: Pod "client-envvars-fd82ad6e-5142-40b8-803c-22b472024e0c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022083537s STEP: Saw pod success May 19 23:57:12.316: INFO: Pod "client-envvars-fd82ad6e-5142-40b8-803c-22b472024e0c" satisfied condition "Succeeded or Failed" May 19 23:57:12.319: INFO: Trying to get logs from node latest-worker2 pod client-envvars-fd82ad6e-5142-40b8-803c-22b472024e0c container env3cont: STEP: delete the pod May 19 23:57:12.491: INFO: Waiting for pod client-envvars-fd82ad6e-5142-40b8-803c-22b472024e0c to disappear May 19 23:57:12.533: INFO: Pod client-envvars-fd82ad6e-5142-40b8-803c-22b472024e0c no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:57:12.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2629" for this suite. • [SLOW TEST:8.710 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":75,"skipped":1144,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:57:12.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:57:19.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-655" for this suite. • [SLOW TEST:7.426 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":288,"completed":76,"skipped":1158,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:57:19.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 19 23:57:20.102: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 19 23:57:29.787: INFO: >>> kubeConfig: /root/.kube/config May 19 23:57:32.733: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:57:43.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-818" for this suite. 
• [SLOW TEST:23.519 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":77,"skipped":1169,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:57:43.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 19 23:57:50.097: INFO: Successfully updated pod "adopt-release-kg68h" STEP: Checking that the Job readopts the Pod May 19 23:57:50.097: INFO: Waiting up to 15m0s for pod "adopt-release-kg68h" in namespace "job-6949" to be "adopted" May 19 23:57:50.159: INFO: Pod "adopt-release-kg68h": Phase="Running", Reason="", readiness=true. Elapsed: 61.416838ms May 19 23:57:52.163: INFO: Pod "adopt-release-kg68h": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.06576206s May 19 23:57:52.163: INFO: Pod "adopt-release-kg68h" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 19 23:57:52.673: INFO: Successfully updated pod "adopt-release-kg68h" STEP: Checking that the Job releases the Pod May 19 23:57:52.673: INFO: Waiting up to 15m0s for pod "adopt-release-kg68h" in namespace "job-6949" to be "released" May 19 23:57:52.725: INFO: Pod "adopt-release-kg68h": Phase="Running", Reason="", readiness=true. Elapsed: 51.635576ms May 19 23:57:54.755: INFO: Pod "adopt-release-kg68h": Phase="Running", Reason="", readiness=true. Elapsed: 2.081803775s May 19 23:57:54.755: INFO: Pod "adopt-release-kg68h" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:57:54.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6949" for this suite. • [SLOW TEST:11.273 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":78,"skipped":1179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:57:54.763: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-00976e7c-94f3-4d12-9f5d-fab2e866e82d in namespace container-probe-4913 May 19 23:57:59.214: INFO: Started pod busybox-00976e7c-94f3-4d12-9f5d-fab2e866e82d in namespace container-probe-4913 STEP: checking the pod's current state and verifying that restartCount is present May 19 23:57:59.216: INFO: Initial restart count of pod busybox-00976e7c-94f3-4d12-9f5d-fab2e866e82d is 0 May 19 23:58:47.351: INFO: Restart count of pod container-probe-4913/busybox-00976e7c-94f3-4d12-9f5d-fab2e866e82d is now 1 (48.134776674s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:58:47.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4913" for this suite. 
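The restart recorded above (`Restart count … is now 1`) is the intended outcome: the busybox pod removes `/tmp/health` partway through, the exec probe begins failing, and after `failureThreshold` consecutive failures the kubelet restarts the container. A toy model of that accounting; the threshold shown is the Kubernetes default of 3, since the e2e pod's exact spec is not in this log:

```python
def count_restarts(probe_results, failure_threshold=3):
    # Restart the container after `failure_threshold` consecutive probe
    # failures; a success resets the streak. Toy model of kubelet behavior.
    restarts = streak = 0
    for ok in probe_results:
        streak = 0 if ok else streak + 1
        if streak >= failure_threshold:
            restarts += 1
            streak = 0  # a fresh container starts with a clean streak
    return restarts

# Healthy while /tmp/health exists, failing after it is removed.
history = [True] * 5 + [False] * 3
```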
• [SLOW TEST:52.635 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":79,"skipped":1246,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:58:47.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-5c43b9c4-5cfa-428b-ae87-2355e782abbe STEP: Creating a pod to test consume configMaps May 19 23:58:47.519: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d0ff56af-2078-4dd4-bf59-528b41ec69fb" in namespace "projected-8363" to be "Succeeded or Failed" May 19 23:58:47.523: INFO: Pod "pod-projected-configmaps-d0ff56af-2078-4dd4-bf59-528b41ec69fb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.68187ms May 19 23:58:49.527: INFO: Pod "pod-projected-configmaps-d0ff56af-2078-4dd4-bf59-528b41ec69fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007443975s May 19 23:58:51.531: INFO: Pod "pod-projected-configmaps-d0ff56af-2078-4dd4-bf59-528b41ec69fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011966717s STEP: Saw pod success May 19 23:58:51.531: INFO: Pod "pod-projected-configmaps-d0ff56af-2078-4dd4-bf59-528b41ec69fb" satisfied condition "Succeeded or Failed" May 19 23:58:51.535: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-d0ff56af-2078-4dd4-bf59-528b41ec69fb container projected-configmap-volume-test: STEP: delete the pod May 19 23:58:51.604: INFO: Waiting for pod pod-projected-configmaps-d0ff56af-2078-4dd4-bf59-528b41ec69fb to disappear May 19 23:58:51.612: INFO: Pod pod-projected-configmaps-d0ff56af-2078-4dd4-bf59-528b41ec69fb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:58:51.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8363" for this suite. 
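The "mappings and Item mode set" spec above projects a ConfigMap key to a chosen file path with a per-item mode overriding the volume's `defaultMode`. A sketch of the item projection (the key, path, and mode values here are illustrative, not the test's actual fixture):

```python
def project_configmap(data, items, default_mode=0o644):
    # Each item maps a ConfigMap key to a relative path; an item-level mode
    # overrides the volume's defaultMode for that one file.
    files = {}
    for item in items:
        files[item["path"]] = {"content": data[item["key"]],
                               "mode": item.get("mode", default_mode)}
    return files

files = project_configmap(
    {"data-1": "value-1"},  # illustrative key/value
    [{"key": "data-1", "path": "path/to/data-1", "mode": 0o400}])
```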
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":80,"skipped":1251,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:58:51.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8615 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-8615 May 19 23:58:51.978: INFO: Found 0 stateful pods, waiting for 1 May 19 23:59:01.983: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 19 23:59:02.015: INFO: Deleting all statefulset in ns statefulset-8615 May 19 
23:59:02.027: INFO: Scaling statefulset ss to 0 May 19 23:59:22.126: INFO: Waiting for statefulset status.replicas updated to 0 May 19 23:59:22.130: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:59:22.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8615" for this suite. • [SLOW TEST:30.526 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":81,"skipped":1262,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:59:22.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 19 23:59:22.264: INFO: Waiting up to 1m0s for all 
(but 0) nodes to be ready May 19 23:59:22.275: INFO: Waiting for terminating namespaces to be deleted... May 19 23:59:22.277: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 19 23:59:22.283: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 19 23:59:22.283: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 19 23:59:22.283: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 19 23:59:22.283: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 19 23:59:22.283: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 19 23:59:22.283: INFO: Container kindnet-cni ready: true, restart count 0 May 19 23:59:22.283: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 19 23:59:22.283: INFO: Container kube-proxy ready: true, restart count 0 May 19 23:59:22.283: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 19 23:59:22.288: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 19 23:59:22.288: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 19 23:59:22.288: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 19 23:59:22.288: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 19 23:59:22.288: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 19 23:59:22.288: INFO: Container kindnet-cni ready: true, restart count 0 May 19 23:59:22.288: INFO: kube-proxy-pcmmp from 
kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 19 23:59:22.288: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-c1fd9849-b8e6-44d8-aa25-0975de4c8dfa 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-c1fd9849-b8e6-44d8-aa25-0975de4c8dfa off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-c1fd9849-b8e6-44d8-aa25-0975de4c8dfa [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:59:30.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7175" for this suite. 
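[Editor's note] The NodeSelector predicate exercised above can be reproduced with a plain manifest. A minimal sketch, assuming the target node has already been labeled (the label key and value below are illustrative — the test generates a random `kubernetes.io/e2e-…` key, not this one):

```yaml
# Schedulable only on a node carrying the matching label,
# e.g. after: kubectl label node latest-worker example.com/e2e-test=42
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    example.com/e2e-test: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.2
```

If no ready node carries the label, the pod stays Pending with a FailedScheduling event; once the label matches, the scheduler places it, which is the behavior the test verifies.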
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.248 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":82,"skipped":1270,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:59:30.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-83bfe273-e544-4c6e-986c-e6ecbab06026 in namespace container-probe-1242 May 19 23:59:34.528: INFO: Started pod liveness-83bfe273-e544-4c6e-986c-e6ecbab06026 in namespace container-probe-1242 STEP: checking the pod's current state and verifying that restartCount is present May 19 23:59:34.531: INFO: 
Initial restart count of pod liveness-83bfe273-e544-4c6e-986c-e6ecbab06026 is 0 May 19 23:59:58.583: INFO: Restart count of pod container-probe-1242/liveness-83bfe273-e544-4c6e-986c-e6ecbab06026 is now 1 (24.051901208s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 23:59:58.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1242" for this suite. • [SLOW TEST:28.234 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":83,"skipped":1279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 23:59:58.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod May 20 00:00:03.073: 
INFO: Pod pod-hostip-d73f3421-57c6-4118-b552-57e32a465602 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:00:03.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8649" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":84,"skipped":1326,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:00:03.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-8479 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8479 to expose endpoints map[] May 20 00:00:03.214: INFO: Get endpoints failed (19.634025ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 20 00:00:04.218: INFO: successfully validated that service endpoint-test2 in namespace services-8479 exposes endpoints map[] (1.02345132s elapsed) STEP: Creating pod pod1 in namespace services-8479 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8479 to expose endpoints map[pod1:[80]] May 20 00:00:08.308: INFO: 
successfully validated that service endpoint-test2 in namespace services-8479 exposes endpoints map[pod1:[80]] (4.083168771s elapsed) STEP: Creating pod pod2 in namespace services-8479 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8479 to expose endpoints map[pod1:[80] pod2:[80]] May 20 00:00:12.556: INFO: successfully validated that service endpoint-test2 in namespace services-8479 exposes endpoints map[pod1:[80] pod2:[80]] (4.244508932s elapsed) STEP: Deleting pod pod1 in namespace services-8479 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8479 to expose endpoints map[pod2:[80]] May 20 00:00:13.639: INFO: successfully validated that service endpoint-test2 in namespace services-8479 exposes endpoints map[pod2:[80]] (1.077774735s elapsed) STEP: Deleting pod pod2 in namespace services-8479 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8479 to expose endpoints map[] May 20 00:00:14.693: INFO: successfully validated that service endpoint-test2 in namespace services-8479 exposes endpoints map[] (1.049223317s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:00:14.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8479" for this suite. 
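[Editor's note] The endpoint bookkeeping verified above is driven purely by label selection. A minimal sketch of the service side, with an illustrative selector (the e2e test labels its pods equivalently):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-test2   # pods carrying this label become endpoints of the service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

Creating or deleting pods labeled `app: endpoint-test2` adds them to or removes them from the service's Endpoints object — the `map[pod1:[80] pod2:[80]]` transitions the test polls for.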
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.671 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":85,"skipped":1348,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:00:14.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 20 00:00:14.834: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7455 /api/v1/namespaces/watch-7455/configmaps/e2e-watch-test-resource-version 8257d5e1-14b2-4aa1-aa8d-089baabd3298 6080701 0 2020-05-20 00:00:14 +0000 UTC 
map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-20 00:00:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 20 00:00:14.834: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7455 /api/v1/namespaces/watch-7455/configmaps/e2e-watch-test-resource-version 8257d5e1-14b2-4aa1-aa8d-089baabd3298 6080702 0 2020-05-20 00:00:14 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-20 00:00:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:00:14.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7455" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":86,"skipped":1357,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:00:14.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 20 00:00:14.928: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:00:15.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7659" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":87,"skipped":1384,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:00:15.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-43b3819b-60b5-472f-9800-c2221c4ef8be [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:00:15.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6847" for this suite. 
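[Editor's note] The negative ConfigMap test above relies on API-server validation of data keys. A sketch of a manifest that should be rejected (the name is illustrative; the test uses a generated one):

```yaml
# Applying this manifest is expected to fail validation: ConfigMap data
# keys must be non-empty and consist of alphanumerics, '-', '_' or '.'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-empty-key
data:
  "": "value"
```

The test passes when the create call returns an Invalid error rather than persisting the object, which is why no pod or cleanup work appears in the log.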
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":88,"skipped":1389,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:00:15.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:00:15.177: INFO: Create a RollingUpdate DaemonSet May 20 00:00:15.181: INFO: Check that daemon pods launch on every node of the cluster May 20 00:00:15.185: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:00:15.205: INFO: Number of nodes with available pods: 0 May 20 00:00:15.205: INFO: Node latest-worker is running more than one daemon pod May 20 00:00:16.211: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:00:16.215: INFO: Number of nodes with available pods: 0 May 20 00:00:16.215: INFO: Node latest-worker is running more than one daemon pod May 20 00:00:17.226: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:00:17.229: INFO: Number of nodes with available pods: 0 May 20 00:00:17.229: INFO: Node latest-worker is running more than one daemon pod May 20 00:00:18.211: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:00:18.215: INFO: Number of nodes with available pods: 0 May 20 00:00:18.215: INFO: Node latest-worker is running more than one daemon pod May 20 00:00:19.211: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:00:19.215: INFO: Number of nodes with available pods: 1 May 20 00:00:19.215: INFO: Node latest-worker2 is running more than one daemon pod May 20 00:00:20.209: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:00:20.212: INFO: Number of nodes with available pods: 2 May 20 00:00:20.212: INFO: Number of running nodes: 2, number of available pods: 2 May 20 00:00:20.212: INFO: Update the DaemonSet to trigger a rollout May 20 00:00:20.226: INFO: Updating DaemonSet daemon-set May 20 00:00:35.348: INFO: Roll back the DaemonSet before rollout is complete May 20 00:00:35.369: INFO: Updating DaemonSet daemon-set May 20 00:00:35.369: INFO: Make sure DaemonSet rollback is complete May 20 00:00:35.383: INFO: Wrong image for pod: daemon-set-229m2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 20 00:00:35.383: INFO: Pod daemon-set-229m2 is not available May 20 00:00:35.425: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:00:36.430: INFO: Wrong image for pod: daemon-set-229m2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 20 00:00:36.430: INFO: Pod daemon-set-229m2 is not available May 20 00:00:36.435: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:00:37.430: INFO: Wrong image for pod: daemon-set-229m2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 20 00:00:37.430: INFO: Pod daemon-set-229m2 is not available May 20 00:00:37.434: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:00:38.430: INFO: Wrong image for pod: daemon-set-229m2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 20 00:00:38.430: INFO: Pod daemon-set-229m2 is not available May 20 00:00:38.434: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:00:39.429: INFO: Wrong image for pod: daemon-set-229m2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 20 00:00:39.429: INFO: Pod daemon-set-229m2 is not available May 20 00:00:39.433: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:00:40.431: INFO: Wrong image for pod: daemon-set-229m2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 20 00:00:40.431: INFO: Pod daemon-set-229m2 is not available May 20 00:00:40.436: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:00:41.431: INFO: Pod daemon-set-gtsbq is not available May 20 00:00:41.439: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5891, will wait for the garbage collector to delete the pods May 20 00:00:41.505: INFO: Deleting DaemonSet.extensions daemon-set took: 6.861307ms May 20 00:00:41.805: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.253108ms May 20 00:00:55.309: INFO: Number of nodes with available pods: 0 May 20 00:00:55.309: INFO: Number of running nodes: 0, number of available pods: 0 May 20 00:00:55.346: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5891/daemonsets","resourceVersion":"6080945"},"items":null} May 20 00:00:55.349: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5891/pods","resourceVersion":"6080945"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:00:55.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5891" for this suite. 
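[Editor's note] The rollback flow above — update the DaemonSet to an unpullable image, then roll back before the rollout finishes — maps onto a RollingUpdate DaemonSet plus `kubectl rollout undo`. A minimal sketch with illustrative labels:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate   # revision history is kept, so the rollout can be undone
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
```

Updating the image to something like `foo:non-existent` stalls the rollout on the first pod; `kubectl rollout undo daemonset/daemon-set` then reverts to the prior template, and — as the test asserts — pods that never received the bad image are not restarted.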
• [SLOW TEST:40.279 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":89,"skipped":1393,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:00:55.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-9b6aa4e1-904b-4a85-97d5-c67f36fe5e22 STEP: Creating a pod to test consume secrets May 20 00:00:55.477: INFO: Waiting up to 5m0s for pod "pod-secrets-65d0ee90-f1ff-4454-a8fc-0bb4235ac72d" in namespace "secrets-7849" to be "Succeeded or Failed" May 20 00:00:55.496: INFO: Pod "pod-secrets-65d0ee90-f1ff-4454-a8fc-0bb4235ac72d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.869891ms May 20 00:00:57.663: INFO: Pod "pod-secrets-65d0ee90-f1ff-4454-a8fc-0bb4235ac72d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.186018326s May 20 00:00:59.668: INFO: Pod "pod-secrets-65d0ee90-f1ff-4454-a8fc-0bb4235ac72d": Phase="Running", Reason="", readiness=true. Elapsed: 4.190351225s May 20 00:01:01.672: INFO: Pod "pod-secrets-65d0ee90-f1ff-4454-a8fc-0bb4235ac72d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.19429088s STEP: Saw pod success May 20 00:01:01.672: INFO: Pod "pod-secrets-65d0ee90-f1ff-4454-a8fc-0bb4235ac72d" satisfied condition "Succeeded or Failed" May 20 00:01:01.674: INFO: Trying to get logs from node latest-worker pod pod-secrets-65d0ee90-f1ff-4454-a8fc-0bb4235ac72d container secret-volume-test: STEP: delete the pod May 20 00:01:01.720: INFO: Waiting for pod pod-secrets-65d0ee90-f1ff-4454-a8fc-0bb4235ac72d to disappear May 20 00:01:01.776: INFO: Pod pod-secrets-65d0ee90-f1ff-4454-a8fc-0bb4235ac72d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:01:01.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7849" for this suite. 
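[Editor's note] "Mappings" in the secret-volume test above refers to the `items` field of a secret volume, which remaps secret keys onto chosen file paths instead of the default key-named files. A sketch with illustrative names (the image and command are placeholders, not the test's actual mounttest container):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mapped
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map   # existing Secret containing key data-1
      items:
      - key: data-1
        path: new-path-data-1       # exposed as /etc/secret-volume/new-path-data-1
```

The test then reads the container's logs to confirm the remapped file holds the secret's value, which is the "Trying to get logs … container secret-volume-test" step in the log.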
• [SLOW TEST:6.417 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":90,"skipped":1401,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:01:01.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:01:01.860: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 20 00:01:04.858: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1262 create -f -' May 20 00:01:08.200: INFO: stderr: "" May 20 00:01:08.200: INFO: stdout: "e2e-test-crd-publish-openapi-7633-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 20 00:01:08.200: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1262 delete e2e-test-crd-publish-openapi-7633-crds test-foo' May 20 00:01:08.317: INFO: stderr: "" May 20 00:01:08.317: INFO: stdout: "e2e-test-crd-publish-openapi-7633-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 20 00:01:08.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1262 apply -f -' May 20 00:01:08.574: INFO: stderr: "" May 20 00:01:08.574: INFO: stdout: "e2e-test-crd-publish-openapi-7633-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 20 00:01:08.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1262 delete e2e-test-crd-publish-openapi-7633-crds test-foo' May 20 00:01:08.700: INFO: stderr: "" May 20 00:01:08.700: INFO: stdout: "e2e-test-crd-publish-openapi-7633-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 20 00:01:08.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1262 create -f -' May 20 00:01:09.008: INFO: rc: 1 May 20 00:01:09.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1262 apply -f -' May 20 00:01:09.250: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 20 00:01:09.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1262 create -f -' May 20 00:01:09.478: INFO: rc: 1 May 20 00:01:09.478: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1262 apply -f -' May 20 00:01:09.711: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 20 00:01:09.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7633-crds' May 20 00:01:09.980: INFO: stderr: "" May 20 00:01:09.980: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7633-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 20 00:01:09.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7633-crds.metadata' May 20 00:01:10.254: INFO: stderr: "" May 20 00:01:10.254: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7633-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. 
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 20 00:01:10.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7633-crds.spec' May 20 00:01:10.476: INFO: stderr: "" May 20 00:01:10.477: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7633-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 20 00:01:10.477: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7633-crds.spec.bars' May 20 00:01:10.720: INFO: stderr: "" May 20 00:01:10.720: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7633-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 20 00:01:10.720: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7633-crds.spec.bars2' May 20 00:01:10.949: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:01:12.908: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1262" for this suite. • [SLOW TEST:11.134 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":91,"skipped":1411,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:01:12.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 00:01:13.705: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 00:01:15.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529673, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529673, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529673, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529673, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 00:01:17.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529673, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529673, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529673, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529673, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 00:01:20.760: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook 
should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:01:20.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2704" for this suite. STEP: Destroying namespace "webhook-2704-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.127 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":92,"skipped":1414,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:01:21.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-cf461750-61cf-4d46-8a52-a34f3d9adce9 STEP: Creating a pod to test consume configMaps May 20 00:01:21.147: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b00450a-fe51-426b-bf48-7e79a8f8a30c" in namespace "configmap-4132" to be "Succeeded or Failed" May 20 00:01:21.174: INFO: Pod "pod-configmaps-5b00450a-fe51-426b-bf48-7e79a8f8a30c": Phase="Pending", Reason="", readiness=false. Elapsed: 27.374204ms May 20 00:01:23.179: INFO: Pod "pod-configmaps-5b00450a-fe51-426b-bf48-7e79a8f8a30c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031804285s May 20 00:01:25.183: INFO: Pod "pod-configmaps-5b00450a-fe51-426b-bf48-7e79a8f8a30c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036393594s STEP: Saw pod success May 20 00:01:25.183: INFO: Pod "pod-configmaps-5b00450a-fe51-426b-bf48-7e79a8f8a30c" satisfied condition "Succeeded or Failed" May 20 00:01:25.187: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-5b00450a-fe51-426b-bf48-7e79a8f8a30c container configmap-volume-test: STEP: delete the pod May 20 00:01:25.239: INFO: Waiting for pod pod-configmaps-5b00450a-fe51-426b-bf48-7e79a8f8a30c to disappear May 20 00:01:25.320: INFO: Pod pod-configmaps-5b00450a-fe51-426b-bf48-7e79a8f8a30c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:01:25.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4132" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":93,"skipped":1441,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:01:25.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 20 00:01:25.387: INFO: Waiting up to 5m0s for pod 
"pod-244c6a89-1257-4585-91e2-edee3d47b3aa" in namespace "emptydir-3034" to be "Succeeded or Failed" May 20 00:01:25.405: INFO: Pod "pod-244c6a89-1257-4585-91e2-edee3d47b3aa": Phase="Pending", Reason="", readiness=false. Elapsed: 17.898137ms May 20 00:01:27.408: INFO: Pod "pod-244c6a89-1257-4585-91e2-edee3d47b3aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020879456s May 20 00:01:29.413: INFO: Pod "pod-244c6a89-1257-4585-91e2-edee3d47b3aa": Phase="Running", Reason="", readiness=true. Elapsed: 4.025517439s May 20 00:01:31.417: INFO: Pod "pod-244c6a89-1257-4585-91e2-edee3d47b3aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030113309s STEP: Saw pod success May 20 00:01:31.418: INFO: Pod "pod-244c6a89-1257-4585-91e2-edee3d47b3aa" satisfied condition "Succeeded or Failed" May 20 00:01:31.421: INFO: Trying to get logs from node latest-worker pod pod-244c6a89-1257-4585-91e2-edee3d47b3aa container test-container: STEP: delete the pod May 20 00:01:31.464: INFO: Waiting for pod pod-244c6a89-1257-4585-91e2-edee3d47b3aa to disappear May 20 00:01:31.474: INFO: Pod pod-244c6a89-1257-4585-91e2-edee3d47b3aa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:01:31.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3034" for this suite. 
• [SLOW TEST:6.149 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":94,"skipped":1466,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:01:31.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 20 00:01:31.580: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51fa5471-a7c4-416e-8473-79f6de2d9125" in namespace "downward-api-1111" to be "Succeeded or Failed" May 20 00:01:31.595: INFO: Pod "downwardapi-volume-51fa5471-a7c4-416e-8473-79f6de2d9125": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.188307ms May 20 00:01:33.599: INFO: Pod "downwardapi-volume-51fa5471-a7c4-416e-8473-79f6de2d9125": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019304598s May 20 00:01:35.609: INFO: Pod "downwardapi-volume-51fa5471-a7c4-416e-8473-79f6de2d9125": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028956645s STEP: Saw pod success May 20 00:01:35.609: INFO: Pod "downwardapi-volume-51fa5471-a7c4-416e-8473-79f6de2d9125" satisfied condition "Succeeded or Failed" May 20 00:01:35.611: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-51fa5471-a7c4-416e-8473-79f6de2d9125 container client-container: STEP: delete the pod May 20 00:01:35.626: INFO: Waiting for pod downwardapi-volume-51fa5471-a7c4-416e-8473-79f6de2d9125 to disappear May 20 00:01:35.643: INFO: Pod downwardapi-volume-51fa5471-a7c4-416e-8473-79f6de2d9125 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:01:35.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1111" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":95,"skipped":1468,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:01:35.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 20 00:01:40.272: INFO: Successfully updated pod "annotationupdate2bec85f2-1c3f-4160-b6f7-4a6dc62b86f0" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:01:42.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-532" for this suite. 
• [SLOW TEST:6.676 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":96,"skipped":1476,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:01:42.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-25f9d828-4e40-4021-8989-4245a0781eee STEP: Creating a pod to test consume secrets May 20 00:01:42.420: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5cdb396b-3135-4970-9822-7f90babed443" in namespace "projected-1575" to be "Succeeded or Failed" May 20 00:01:42.442: INFO: Pod "pod-projected-secrets-5cdb396b-3135-4970-9822-7f90babed443": Phase="Pending", Reason="", readiness=false. Elapsed: 21.858304ms May 20 00:01:44.519: INFO: Pod "pod-projected-secrets-5cdb396b-3135-4970-9822-7f90babed443": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.099015359s May 20 00:01:46.523: INFO: Pod "pod-projected-secrets-5cdb396b-3135-4970-9822-7f90babed443": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103000804s STEP: Saw pod success May 20 00:01:46.524: INFO: Pod "pod-projected-secrets-5cdb396b-3135-4970-9822-7f90babed443" satisfied condition "Succeeded or Failed" May 20 00:01:46.526: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-5cdb396b-3135-4970-9822-7f90babed443 container secret-volume-test: STEP: delete the pod May 20 00:01:46.591: INFO: Waiting for pod pod-projected-secrets-5cdb396b-3135-4970-9822-7f90babed443 to disappear May 20 00:01:46.611: INFO: Pod pod-projected-secrets-5cdb396b-3135-4970-9822-7f90babed443 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:01:46.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1575" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":97,"skipped":1487,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:01:46.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 00:01:47.128: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 00:01:49.151: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529707, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529707, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725529707, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529707, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 00:01:51.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529707, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529707, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529707, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529707, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 00:01:54.180: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:01:54.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-3869" for this suite. STEP: Destroying namespace "webhook-3869-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.835 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":98,"skipped":1515,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:01:54.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 00:01:55.549: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 00:01:57.579: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529715, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529715, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529715, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529715, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 00:01:59.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529715, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529715, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529715, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725529715, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 
20 00:02:02.646: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:02:12.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2734" for this suite. STEP: Destroying namespace "webhook-2734-markers" for this suite. 
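(Editor's sketch, not part of the log.) The deny/admit steps recorded above exercise a validating admission webhook. As a minimal illustration of the decision such a webhook returns, here is a hypothetical handler; the rejection rule below (deny any configmap whose data contains the value "webhook-disallow") is an illustrative stand-in, not the actual policy compiled into the e2e sample webhook.

```python
# Hypothetical sketch of a validating admission webhook's decision logic.
# The rule (reject configmaps containing the value "webhook-disallow") is
# made up for illustration; the e2e test's real policy may differ.
def review(admission_review: dict) -> dict:
    req = admission_review["request"]
    data = (req.get("object") or {}).get("data") or {}
    allowed = "webhook-disallow" not in data.values()
    response = {"uid": req["uid"], "allowed": allowed}
    if not allowed:
        # A denied request carries a human-readable reason back to the client.
        response["status"] = {"message": "denied by sketch policy"}
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }
```

Both CREATE and UPDATE (PUT/PATCH) requests flow through the same review, which is why the log shows updates to a compliant configmap being rejected once the new content violates the policy.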
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.604 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":99,"skipped":1517,"failed":0} SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:02:13.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 00:02:13.146: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 00:02:13.165: INFO: Waiting for terminating namespaces to be deleted... 
May 20 00:02:13.168: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 20 00:02:13.172: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 20 00:02:13.172: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 20 00:02:13.172: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 20 00:02:13.172: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 20 00:02:13.172: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 20 00:02:13.172: INFO: Container kindnet-cni ready: true, restart count 0 May 20 00:02:13.172: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 20 00:02:13.172: INFO: Container kube-proxy ready: true, restart count 0 May 20 00:02:13.172: INFO: sample-webhook-deployment-75dd644756-m46tl from webhook-2734 started at 2020-05-20 00:01:55 +0000 UTC (1 container statuses recorded) May 20 00:02:13.172: INFO: Container sample-webhook ready: true, restart count 0 May 20 00:02:13.172: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 20 00:02:13.177: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 20 00:02:13.177: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 20 00:02:13.177: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 20 00:02:13.177: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 20 00:02:13.177: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses 
recorded) May 20 00:02:13.177: INFO: Container kindnet-cni ready: true, restart count 0 May 20 00:02:13.177: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 20 00:02:13.177: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-256b7776-4099-40bf-90a7-e2646e2ac9d9 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-256b7776-4099-40bf-90a7-e2646e2ac9d9 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-256b7776-4099-40bf-90a7-e2646e2ac9d9 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:07:21.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9201" for this suite. 
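(Editor's sketch, not part of the log.) The test above expects pod5 (hostPort 54322, hostIP 127.0.0.1) to be unschedulable on the node already running pod4 (same port, hostIP 0.0.0.0). The underlying OS behavior can be reproduced outside Kubernetes: a wildcard bind claims the port on every interface, so a subsequent loopback bind on the same port fails.

```python
import errno
import socket

# Demonstrates why the scheduler treats hostIP 0.0.0.0 and hostIP 127.0.0.1
# with the same hostPort/protocol as conflicting: the wildcard bind already
# claims the port on every interface, so the second bind fails with
# EADDRINUSE (absent SO_REUSEADDR/SO_REUSEPORT).
def wildcard_conflicts_with_loopback() -> bool:
    s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s1.bind(("0.0.0.0", 0))          # let the OS pick a free port
        port = s1.getsockname()[1]
        s2.bind(("127.0.0.1", port))     # expected to fail
        return False
    except OSError as e:
        return e.errno == errno.EADDRINUSE
    finally:
        s1.close()
        s2.close()
```

This is why the predicate considers the two pods in conflict even though their hostIP fields differ.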
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.347 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":100,"skipped":1527,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:07:21.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 00:07:21.479: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 00:07:21.492: INFO: Waiting for terminating namespaces to be deleted... 
May 20 00:07:21.495: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 20 00:07:21.525: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 20 00:07:21.525: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 20 00:07:21.525: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 20 00:07:21.525: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 20 00:07:21.525: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 20 00:07:21.525: INFO: Container kindnet-cni ready: true, restart count 0 May 20 00:07:21.525: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 20 00:07:21.525: INFO: Container kube-proxy ready: true, restart count 0 May 20 00:07:21.525: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 20 00:07:21.531: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 20 00:07:21.531: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 20 00:07:21.531: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 20 00:07:21.531: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 20 00:07:21.531: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 20 00:07:21.531: INFO: Container kindnet-cni ready: true, restart count 0 May 20 00:07:21.531: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 20 00:07:21.531: 
INFO: Container kube-proxy ready: true, restart count 0 May 20 00:07:21.531: INFO: pod4 from sched-pred-9201 started at 2020-05-20 00:02:17 +0000 UTC (1 container statuses recorded) May 20 00:07:21.531: INFO: Container pod4 ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.161093dd6647e8bc], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.161093dd678f8001], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:07:28.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8606" for this suite. 
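(Editor's sketch, not part of the log.) The FailedScheduling events above ("0/3 nodes are available: 3 node(s) didn't match node selector.") come from the nodeSelector predicate: a node is feasible only if its labels contain every key/value pair in the pod's spec.nodeSelector. A minimal sketch of that check, with made-up label names:

```python
# Sketch of the nodeSelector feasibility check. A pod with a nonempty
# spec.nodeSelector schedules only onto nodes whose labels contain every
# listed key/value pair exactly. Label names below are illustrative.
def matches_node_selector(node_labels: dict, node_selector: dict) -> bool:
    return all(node_labels.get(k) == v for k, v in node_selector.items())

def feasible_nodes(nodes: dict, node_selector: dict) -> list:
    # nodes: name -> labels; an empty result corresponds to the
    # "0/N nodes are available" event seen in the log.
    return [name for name, labels in nodes.items()
            if matches_node_selector(labels, node_selector)]
```

With a selector no node carries, the feasible set is empty and the pod stays Pending, which is exactly what the test asserts via the two FailedScheduling events.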
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.173 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":101,"skipped":1531,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:07:28.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4770 STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 00:07:28.629: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 00:07:28.693: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 00:07:30.698: INFO: The status of Pod netserver-0 is Pending, waiting for 
it to be Running (with Ready = true) May 20 00:07:32.698: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:07:34.698: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:07:36.698: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:07:38.698: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:07:40.698: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:07:42.697: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:07:44.706: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:07:46.703: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:07:48.709: INFO: The status of Pod netserver-0 is Running (Ready = true) May 20 00:07:48.715: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 00:07:50.720: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 20 00:07:54.776: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.121 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4770 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 00:07:54.776: INFO: >>> kubeConfig: /root/.kube/config I0520 00:07:54.812353 7 log.go:172] (0xc000eba2c0) (0xc001ca1ae0) Create stream I0520 00:07:54.812384 7 log.go:172] (0xc000eba2c0) (0xc001ca1ae0) Stream added, broadcasting: 1 I0520 00:07:54.814170 7 log.go:172] (0xc000eba2c0) Reply frame received for 1 I0520 00:07:54.814208 7 log.go:172] (0xc000eba2c0) (0xc001ca1cc0) Create stream I0520 00:07:54.814221 7 log.go:172] (0xc000eba2c0) (0xc001ca1cc0) Stream added, broadcasting: 3 I0520 00:07:54.814974 7 log.go:172] (0xc000eba2c0) Reply frame received for 3 I0520 00:07:54.815002 7 log.go:172] (0xc000eba2c0) (0xc001ca1ea0) Create stream I0520 00:07:54.815014 7 log.go:172] (0xc000eba2c0) (0xc001ca1ea0) Stream added, 
broadcasting: 5 I0520 00:07:54.815958 7 log.go:172] (0xc000eba2c0) Reply frame received for 5 I0520 00:07:55.992445 7 log.go:172] (0xc000eba2c0) Data frame received for 3 I0520 00:07:55.992487 7 log.go:172] (0xc001ca1cc0) (3) Data frame handling I0520 00:07:55.992506 7 log.go:172] (0xc001ca1cc0) (3) Data frame sent I0520 00:07:55.992812 7 log.go:172] (0xc000eba2c0) Data frame received for 5 I0520 00:07:55.992855 7 log.go:172] (0xc001ca1ea0) (5) Data frame handling I0520 00:07:55.992898 7 log.go:172] (0xc000eba2c0) Data frame received for 3 I0520 00:07:55.992931 7 log.go:172] (0xc001ca1cc0) (3) Data frame handling I0520 00:07:55.995320 7 log.go:172] (0xc000eba2c0) Data frame received for 1 I0520 00:07:55.995353 7 log.go:172] (0xc001ca1ae0) (1) Data frame handling I0520 00:07:55.995376 7 log.go:172] (0xc001ca1ae0) (1) Data frame sent I0520 00:07:55.995566 7 log.go:172] (0xc000eba2c0) (0xc001ca1ae0) Stream removed, broadcasting: 1 I0520 00:07:55.995683 7 log.go:172] (0xc000eba2c0) (0xc001ca1ae0) Stream removed, broadcasting: 1 I0520 00:07:55.995754 7 log.go:172] (0xc000eba2c0) (0xc001ca1cc0) Stream removed, broadcasting: 3 I0520 00:07:55.995817 7 log.go:172] (0xc000eba2c0) (0xc001ca1ea0) Stream removed, broadcasting: 5 May 20 00:07:55.995: INFO: Found all expected endpoints: [netserver-0] I0520 00:07:55.995938 7 log.go:172] (0xc000eba2c0) Go away received May 20 00:07:56.000: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.127 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4770 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 00:07:56.000: INFO: >>> kubeConfig: /root/.kube/config I0520 00:07:56.028216 7 log.go:172] (0xc00203c420) (0xc0021a4960) Create stream I0520 00:07:56.028243 7 log.go:172] (0xc00203c420) (0xc0021a4960) Stream added, broadcasting: 1 I0520 00:07:56.030306 7 log.go:172] (0xc00203c420) Reply frame received for 1 I0520 00:07:56.030338 7 
log.go:172] (0xc00203c420) (0xc001912820) Create stream I0520 00:07:56.030350 7 log.go:172] (0xc00203c420) (0xc001912820) Stream added, broadcasting: 3 I0520 00:07:56.031545 7 log.go:172] (0xc00203c420) Reply frame received for 3 I0520 00:07:56.031589 7 log.go:172] (0xc00203c420) (0xc0026634a0) Create stream I0520 00:07:56.031605 7 log.go:172] (0xc00203c420) (0xc0026634a0) Stream added, broadcasting: 5 I0520 00:07:56.033412 7 log.go:172] (0xc00203c420) Reply frame received for 5 I0520 00:07:57.144881 7 log.go:172] (0xc00203c420) Data frame received for 3 I0520 00:07:57.144912 7 log.go:172] (0xc001912820) (3) Data frame handling I0520 00:07:57.144941 7 log.go:172] (0xc001912820) (3) Data frame sent I0520 00:07:57.144964 7 log.go:172] (0xc00203c420) Data frame received for 3 I0520 00:07:57.144985 7 log.go:172] (0xc001912820) (3) Data frame handling I0520 00:07:57.145024 7 log.go:172] (0xc00203c420) Data frame received for 5 I0520 00:07:57.145087 7 log.go:172] (0xc0026634a0) (5) Data frame handling I0520 00:07:57.147692 7 log.go:172] (0xc00203c420) Data frame received for 1 I0520 00:07:57.147719 7 log.go:172] (0xc0021a4960) (1) Data frame handling I0520 00:07:57.147746 7 log.go:172] (0xc0021a4960) (1) Data frame sent I0520 00:07:57.147769 7 log.go:172] (0xc00203c420) (0xc0021a4960) Stream removed, broadcasting: 1 I0520 00:07:57.147794 7 log.go:172] (0xc00203c420) Go away received I0520 00:07:57.147936 7 log.go:172] (0xc00203c420) (0xc0021a4960) Stream removed, broadcasting: 1 I0520 00:07:57.147968 7 log.go:172] (0xc00203c420) (0xc001912820) Stream removed, broadcasting: 3 I0520 00:07:57.147993 7 log.go:172] (0xc00203c420) (0xc0026634a0) Stream removed, broadcasting: 5 May 20 00:07:57.148: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:07:57.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "pod-network-test-4770" for this suite. • [SLOW TEST:28.602 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":102,"skipped":1541,"failed":0} SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:07:57.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:07:57.231: INFO: Creating deployment "test-recreate-deployment" May 20 00:07:57.239: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 20 00:07:57.268: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 20 00:07:59.277: INFO: Waiting deployment 
"test-recreate-deployment" to complete May 20 00:07:59.279: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530077, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530077, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530077, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530077, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 00:08:01.284: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 20 00:08:01.293: INFO: Updating deployment test-recreate-deployment May 20 00:08:01.293: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 20 00:08:02.010: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4755 /apis/apps/v1/namespaces/deployment-4755/deployments/test-recreate-deployment 2d071243-5f08-45d3-bd4e-6f79d3ebca13 6082742 2 2020-05-20 00:07:57 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-20 00:08:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-20 00:08:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00535d5f8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-20 00:08:01 +0000 UTC,LastTransitionTime:2020-05-20 00:08:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-20 00:08:01 +0000 UTC,LastTransitionTime:2020-05-20 00:07:57 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 20 00:08:02.014: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-4755 /apis/apps/v1/namespaces/deployment-4755/replicasets/test-recreate-deployment-d5667d9c7 917ce404-182c-4fdb-b52a-b3b4b9b9e027 6082740 1 2020-05-20 00:08:01 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 2d071243-5f08-45d3-bd4e-6f79d3ebca13 0xc002c2f1e0 0xc002c2f1e1}] [] [{kube-controller-manager Update apps/v1 2020-05-20 00:08:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d071243-5f08-45d3-bd4e-6f79d3ebca13\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c2f258 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 20 00:08:02.014: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 20 00:08:02.014: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-4755 /apis/apps/v1/namespaces/deployment-4755/replicasets/test-recreate-deployment-6d65b9f6d8 0740c5b9-1775-4655-bced-eea7d9111fcc 6082731 2 2020-05-20 00:07:57 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 2d071243-5f08-45d3-bd4e-6f79d3ebca13 0xc002c2f0e7 0xc002c2f0e8}] [] [{kube-controller-manager Update apps/v1 2020-05-20 00:08:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d071243-5f08-45d3-bd4e-6f79d3ebca13\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]stri
ng{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c2f178 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 20 00:08:02.028: INFO: Pod "test-recreate-deployment-d5667d9c7-mr5xc" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-mr5xc test-recreate-deployment-d5667d9c7- deployment-4755 /api/v1/namespaces/deployment-4755/pods/test-recreate-deployment-d5667d9c7-mr5xc 7d764d29-fa84-40c9-a9cc-1916fdb049ca 6082745 0 2020-05-20 00:08:01 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 917ce404-182c-4fdb-b52a-b3b4b9b9e027 0xc00528b9d0 0xc00528b9d1}] [] [{kube-controller-manager Update v1 2020-05-20 00:08:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"917ce404-182c-4fdb-b52a-b3b4b9b9e027\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:08:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tcj9d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tcj9d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tcj9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:08:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:08:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:08:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:08:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 00:08:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:08:02.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4755" for this suite. 
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":103,"skipped":1544,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:08:02.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-e8db90ae-54df-41ac-8531-1212c8a0e491 in namespace container-probe-1817 May 20 00:08:08.155: INFO: Started pod test-webserver-e8db90ae-54df-41ac-8531-1212c8a0e491 in namespace container-probe-1817 STEP: checking the pod's current state and verifying that restartCount is present May 20 00:08:08.157: INFO: Initial restart count of pod test-webserver-e8db90ae-54df-41ac-8531-1212c8a0e491 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:12:08.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1817" for this suite. 
• [SLOW TEST:246.795 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":104,"skipped":1551,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:12:08.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 20 00:12:08.886: INFO: Waiting up to 5m0s for pod "downward-api-01fbaca7-c684-4eef-947a-5dab853295c8" in namespace "downward-api-1087" to be "Succeeded or Failed" May 20 00:12:08.902: INFO: Pod "downward-api-01fbaca7-c684-4eef-947a-5dab853295c8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.518225ms May 20 00:12:10.933: INFO: Pod "downward-api-01fbaca7-c684-4eef-947a-5dab853295c8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.047362159s May 20 00:12:12.938: INFO: Pod "downward-api-01fbaca7-c684-4eef-947a-5dab853295c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052132115s STEP: Saw pod success May 20 00:12:12.938: INFO: Pod "downward-api-01fbaca7-c684-4eef-947a-5dab853295c8" satisfied condition "Succeeded or Failed" May 20 00:12:12.941: INFO: Trying to get logs from node latest-worker pod downward-api-01fbaca7-c684-4eef-947a-5dab853295c8 container dapi-container: STEP: delete the pod May 20 00:12:12.993: INFO: Waiting for pod downward-api-01fbaca7-c684-4eef-947a-5dab853295c8 to disappear May 20 00:12:12.997: INFO: Pod downward-api-01fbaca7-c684-4eef-947a-5dab853295c8 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:12:12.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1087" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":105,"skipped":1618,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:12:13.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node 
allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 20 00:12:13.107: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0afc4608-3f8f-44cc-83d8-e5b1c7b9011f" in namespace "projected-84" to be "Succeeded or Failed" May 20 00:12:13.111: INFO: Pod "downwardapi-volume-0afc4608-3f8f-44cc-83d8-e5b1c7b9011f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.864554ms May 20 00:12:15.115: INFO: Pod "downwardapi-volume-0afc4608-3f8f-44cc-83d8-e5b1c7b9011f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007861095s May 20 00:12:17.124: INFO: Pod "downwardapi-volume-0afc4608-3f8f-44cc-83d8-e5b1c7b9011f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016835684s STEP: Saw pod success May 20 00:12:17.124: INFO: Pod "downwardapi-volume-0afc4608-3f8f-44cc-83d8-e5b1c7b9011f" satisfied condition "Succeeded or Failed" May 20 00:12:17.127: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0afc4608-3f8f-44cc-83d8-e5b1c7b9011f container client-container: STEP: delete the pod May 20 00:12:17.160: INFO: Waiting for pod downwardapi-volume-0afc4608-3f8f-44cc-83d8-e5b1c7b9011f to disappear May 20 00:12:17.165: INFO: Pod downwardapi-volume-0afc4608-3f8f-44cc-83d8-e5b1c7b9011f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:12:17.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-84" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":106,"skipped":1628,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:12:17.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-384b7e93-85f4-4711-a56c-7040443e92fa STEP: Creating a pod to test consume configMaps May 20 00:12:17.530: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cab37c63-465d-416b-a11c-614b16faedd9" in namespace "projected-9303" to be "Succeeded or Failed" May 20 00:12:17.534: INFO: Pod "pod-projected-configmaps-cab37c63-465d-416b-a11c-614b16faedd9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.71669ms May 20 00:12:19.538: INFO: Pod "pod-projected-configmaps-cab37c63-465d-416b-a11c-614b16faedd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007800237s May 20 00:12:21.543: INFO: Pod "pod-projected-configmaps-cab37c63-465d-416b-a11c-614b16faedd9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012401976s STEP: Saw pod success May 20 00:12:21.543: INFO: Pod "pod-projected-configmaps-cab37c63-465d-416b-a11c-614b16faedd9" satisfied condition "Succeeded or Failed" May 20 00:12:21.546: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-cab37c63-465d-416b-a11c-614b16faedd9 container projected-configmap-volume-test: STEP: delete the pod May 20 00:12:21.589: INFO: Waiting for pod pod-projected-configmaps-cab37c63-465d-416b-a11c-614b16faedd9 to disappear May 20 00:12:21.606: INFO: Pod pod-projected-configmaps-cab37c63-465d-416b-a11c-614b16faedd9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:12:21.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9303" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":107,"skipped":1664,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:12:21.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0520 
00:12:31.740724 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 20 00:12:31.740: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:12:31.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8571" for this suite. 
• [SLOW TEST:10.134 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":108,"skipped":1664,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:12:31.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:12:35.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9449" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":109,"skipped":1690,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:12:35.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7524 STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 00:12:35.958: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 00:12:36.056: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 00:12:38.081: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 00:12:40.065: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:12:42.059: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:12:44.060: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:12:46.060: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:12:48.059: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:12:50.059: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 
00:12:52.060: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:12:54.061: INFO: The status of Pod netserver-0 is Running (Ready = true) May 20 00:12:54.068: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 00:12:56.073: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 00:12:58.072: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 20 00:13:02.157: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.135:8080/dial?request=hostname&protocol=udp&host=10.244.1.126&port=8081&tries=1'] Namespace:pod-network-test-7524 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 00:13:02.157: INFO: >>> kubeConfig: /root/.kube/config I0520 00:13:02.198468 7 log.go:172] (0xc000eba8f0) (0xc0021a4960) Create stream I0520 00:13:02.198520 7 log.go:172] (0xc000eba8f0) (0xc0021a4960) Stream added, broadcasting: 1 I0520 00:13:02.200681 7 log.go:172] (0xc000eba8f0) Reply frame received for 1 I0520 00:13:02.200721 7 log.go:172] (0xc000eba8f0) (0xc0021a4a00) Create stream I0520 00:13:02.200736 7 log.go:172] (0xc000eba8f0) (0xc0021a4a00) Stream added, broadcasting: 3 I0520 00:13:02.201851 7 log.go:172] (0xc000eba8f0) Reply frame received for 3 I0520 00:13:02.201884 7 log.go:172] (0xc000eba8f0) (0xc001ca1220) Create stream I0520 00:13:02.201900 7 log.go:172] (0xc000eba8f0) (0xc001ca1220) Stream added, broadcasting: 5 I0520 00:13:02.202949 7 log.go:172] (0xc000eba8f0) Reply frame received for 5 I0520 00:13:02.319209 7 log.go:172] (0xc000eba8f0) Data frame received for 3 I0520 00:13:02.319251 7 log.go:172] (0xc0021a4a00) (3) Data frame handling I0520 00:13:02.319283 7 log.go:172] (0xc0021a4a00) (3) Data frame sent I0520 00:13:02.320271 7 log.go:172] (0xc000eba8f0) Data frame received for 3 I0520 00:13:02.320316 7 log.go:172] (0xc0021a4a00) (3) Data frame handling I0520 00:13:02.320398 7 
log.go:172] (0xc000eba8f0) Data frame received for 5 I0520 00:13:02.320428 7 log.go:172] (0xc001ca1220) (5) Data frame handling I0520 00:13:02.322881 7 log.go:172] (0xc000eba8f0) Data frame received for 1 I0520 00:13:02.322938 7 log.go:172] (0xc0021a4960) (1) Data frame handling I0520 00:13:02.322962 7 log.go:172] (0xc0021a4960) (1) Data frame sent I0520 00:13:02.322978 7 log.go:172] (0xc000eba8f0) (0xc0021a4960) Stream removed, broadcasting: 1 I0520 00:13:02.323002 7 log.go:172] (0xc000eba8f0) Go away received I0520 00:13:02.323331 7 log.go:172] (0xc000eba8f0) (0xc0021a4960) Stream removed, broadcasting: 1 I0520 00:13:02.323358 7 log.go:172] (0xc000eba8f0) (0xc0021a4a00) Stream removed, broadcasting: 3 I0520 00:13:02.323373 7 log.go:172] (0xc000eba8f0) (0xc001ca1220) Stream removed, broadcasting: 5 May 20 00:13:02.323: INFO: Waiting for responses: map[] May 20 00:13:02.326: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.135:8080/dial?request=hostname&protocol=udp&host=10.244.2.134&port=8081&tries=1'] Namespace:pod-network-test-7524 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 00:13:02.326: INFO: >>> kubeConfig: /root/.kube/config I0520 00:13:02.356501 7 log.go:172] (0xc00203cb00) (0xc00201db80) Create stream I0520 00:13:02.356531 7 log.go:172] (0xc00203cb00) (0xc00201db80) Stream added, broadcasting: 1 I0520 00:13:02.359023 7 log.go:172] (0xc00203cb00) Reply frame received for 1 I0520 00:13:02.359059 7 log.go:172] (0xc00203cb00) (0xc0021a4aa0) Create stream I0520 00:13:02.359071 7 log.go:172] (0xc00203cb00) (0xc0021a4aa0) Stream added, broadcasting: 3 I0520 00:13:02.359992 7 log.go:172] (0xc00203cb00) Reply frame received for 3 I0520 00:13:02.360017 7 log.go:172] (0xc00203cb00) (0xc0021a4b40) Create stream I0520 00:13:02.360027 7 log.go:172] (0xc00203cb00) (0xc0021a4b40) Stream added, broadcasting: 5 I0520 00:13:02.360966 7 log.go:172] (0xc00203cb00) 
Reply frame received for 5 I0520 00:13:02.432200 7 log.go:172] (0xc00203cb00) Data frame received for 3 I0520 00:13:02.432233 7 log.go:172] (0xc0021a4aa0) (3) Data frame handling I0520 00:13:02.432263 7 log.go:172] (0xc0021a4aa0) (3) Data frame sent I0520 00:13:02.432659 7 log.go:172] (0xc00203cb00) Data frame received for 5 I0520 00:13:02.432682 7 log.go:172] (0xc0021a4b40) (5) Data frame handling I0520 00:13:02.432715 7 log.go:172] (0xc00203cb00) Data frame received for 3 I0520 00:13:02.432758 7 log.go:172] (0xc0021a4aa0) (3) Data frame handling I0520 00:13:02.434551 7 log.go:172] (0xc00203cb00) Data frame received for 1 I0520 00:13:02.434599 7 log.go:172] (0xc00201db80) (1) Data frame handling I0520 00:13:02.434649 7 log.go:172] (0xc00201db80) (1) Data frame sent I0520 00:13:02.434690 7 log.go:172] (0xc00203cb00) (0xc00201db80) Stream removed, broadcasting: 1 I0520 00:13:02.434720 7 log.go:172] (0xc00203cb00) Go away received I0520 00:13:02.434889 7 log.go:172] (0xc00203cb00) (0xc00201db80) Stream removed, broadcasting: 1 I0520 00:13:02.434922 7 log.go:172] (0xc00203cb00) (0xc0021a4aa0) Stream removed, broadcasting: 3 I0520 00:13:02.434951 7 log.go:172] (0xc00203cb00) (0xc0021a4b40) Stream removed, broadcasting: 5 May 20 00:13:02.435: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:13:02.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7524" for this suite. 
• [SLOW TEST:26.534 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":110,"skipped":1692,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:13:02.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 00:13:02.571: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 00:13:02.591: INFO: Waiting for terminating namespaces to be deleted... 
May 20 00:13:02.595: INFO: Logging pods the apiserver thinks is on node latest-worker before test
May 20 00:13:02.600: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded)
May 20 00:13:02.600: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0
May 20 00:13:02.600: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded)
May 20 00:13:02.600: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0
May 20 00:13:02.600: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 20 00:13:02.600: INFO: Container kindnet-cni ready: true, restart count 0
May 20 00:13:02.600: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 20 00:13:02.600: INFO: Container kube-proxy ready: true, restart count 0
May 20 00:13:02.600: INFO: netserver-0 from pod-network-test-7524 started at 2020-05-20 00:12:36 +0000 UTC (1 container statuses recorded)
May 20 00:13:02.600: INFO: Container webserver ready: true, restart count 0
May 20 00:13:02.600: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
May 20 00:13:02.606: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded)
May 20 00:13:02.607: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0
May 20 00:13:02.607: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded)
May 20 00:13:02.607: INFO: Container terminate-cmd-rpa ready: true, restart count 2
May 20 00:13:02.607: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 20 00:13:02.607: INFO: Container kindnet-cni ready: true, restart count 0
May 20 00:13:02.607: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 20 00:13:02.607: INFO: Container kube-proxy ready: true, restart count 0
May 20 00:13:02.607: INFO: netserver-1 from pod-network-test-7524 started at 2020-05-20 00:12:36 +0000 UTC (1 container statuses recorded)
May 20 00:13:02.607: INFO: Container webserver ready: true, restart count 0
May 20 00:13:02.607: INFO: test-container-pod from pod-network-test-7524 started at 2020-05-20 00:12:58 +0000 UTC (1 container statuses recorded)
May 20 00:13:02.607: INFO: Container webserver ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: verifying the node has the label node latest-worker
STEP: verifying the node has the label node latest-worker2
May 20 00:13:02.715: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker
May 20 00:13:02.715: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2
May 20 00:13:02.715: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker
May 20 00:13:02.715: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2
May 20 00:13:02.715: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker
May 20 00:13:02.715: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2
May 20 00:13:02.715: INFO: Pod netserver-0 requesting resource cpu=0m on Node latest-worker
May 20 00:13:02.715: INFO: Pod netserver-1 requesting resource cpu=0m on Node latest-worker2
May 20 00:13:02.715: INFO: Pod test-container-pod requesting resource cpu=0m on Node latest-worker2
STEP: Starting Pods to consume most of the cluster CPU.
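As an aside for readers tracing the numbers: the filler pods this test creates are sized so that each node's remaining CPU is fully consumed. A minimal sketch of that arithmetic (not the e2e framework's actual code), with the 100m kindnet request taken from the log above and the 11230m allocatable figure an inferred stand-in, since the real value is not printed in this run:

```python
# Sketch of how a filler-pod request can be derived: take the node's
# allocatable CPU and subtract what running pods already request, so that
# nothing is left over for the next pod to schedule onto.

def filler_cpu_millis(allocatable_millis, requested_millis):
    """Millicores the filler pod must request to exhaust the node's CPU."""
    return allocatable_millis - sum(requested_millis)

# With the requests logged above (kindnet 100m, everything else 0m) and an
# inferred allocatable of 11230m, the filler pod comes out to 11130m:
print(filler_cpu_millis(11230, [100, 0, 0, 0]))  # 11130
```

This matches the `cpu=11130m` filler pods created on both workers in the log that follows.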
May 20 00:13:02.715: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker
May 20 00:13:02.721: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-69d98e28-2023-4d12-9588-13da0b263d61.1610942b70382ee7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7960/filler-pod-69d98e28-2023-4d12-9588-13da0b263d61 to latest-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-69d98e28-2023-4d12-9588-13da0b263d61.1610942bc0394107], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-69d98e28-2023-4d12-9588-13da0b263d61.1610942c144f179c], Reason = [Created], Message = [Created container filler-pod-69d98e28-2023-4d12-9588-13da0b263d61]
STEP: Considering event: Type = [Normal], Name = [filler-pod-69d98e28-2023-4d12-9588-13da0b263d61.1610942c2f23c2a5], Reason = [Started], Message = [Started container filler-pod-69d98e28-2023-4d12-9588-13da0b263d61]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a0941e31-b222-46f0-b881-dd7f832dc480.1610942b71806169], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7960/filler-pod-a0941e31-b222-46f0-b881-dd7f832dc480 to latest-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a0941e31-b222-46f0-b881-dd7f832dc480.1610942c0ed0abaf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a0941e31-b222-46f0-b881-dd7f832dc480.1610942c4cb38cdb], Reason = [Created], Message = [Created container filler-pod-a0941e31-b222-46f0-b881-dd7f832dc480]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a0941e31-b222-46f0-b881-dd7f832dc480.1610942c5da764f2], Reason = [Started], Message = [Started container filler-pod-a0941e31-b222-46f0-b881-dd7f832dc480]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1610942cd977c3a5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1610942cdbde031d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node latest-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node latest-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:13:09.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7960" for this suite.
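For context on the FailedScheduling message in the events above ("0/3 nodes are available: 1 node(s) had taint ..., 2 Insufficient cpu"), here is an illustrative model of the two checks involved: a taint-toleration filter and a CPU-fit predicate. This is not the real kube-scheduler code, and the node data is hypothetical, not read from this cluster:

```python
# Illustrative model of why the additional pod stays Pending: the master is
# filtered out by an untolerated NoSchedule taint, and both workers fail the
# CPU-fit check once the filler pods have claimed all allocatable CPU.

def why_unschedulable(pod_cpu_millis, nodes):
    """Per-node reason a pod requesting pod_cpu_millis cannot be placed."""
    reasons = []
    for node in nodes:
        if node["tainted"] and not node["tolerated"]:
            reasons.append("node(s) had taint")
        elif node["requested"] + pod_cpu_millis > node["allocatable"]:
            reasons.append("Insufficient cpu")
        else:
            reasons.append(None)  # node would fit
    return reasons

nodes = [
    {"tainted": True,  "tolerated": False, "requested": 0,     "allocatable": 16000},  # control-plane
    {"tainted": False, "tolerated": True,  "requested": 11230, "allocatable": 11230},  # latest-worker
    {"tainted": False, "tolerated": True,  "requested": 11230, "allocatable": 11230},  # latest-worker2
]
print(why_unschedulable(100, nodes))
# ['node(s) had taint', 'Insufficient cpu', 'Insufficient cpu']
```

The scheduler aggregates these per-node reasons into the "0/3 nodes are available" event the test asserts on.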
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:7.424 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":111,"skipped":1706,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:13:09.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with configMap that has name projected-configmap-test-upd-5c0be69f-5346-459d-b994-eeb55273cb37
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-5c0be69f-5346-459d-b994-eeb55273cb37
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:13:16.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2719" for this suite.
• [SLOW TEST:6.231 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":112,"skipped":1716,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:13:16.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-wkjf
STEP: Creating a pod to test atomic-volume-subpath
May 20 00:13:16.211: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wkjf" in namespace "subpath-9280" to be "Succeeded or Failed"
May 20 00:13:16.238: INFO: Pod "pod-subpath-test-configmap-wkjf": Phase="Pending", Reason="", readiness=false. Elapsed: 27.294837ms
May 20 00:13:18.252: INFO: Pod "pod-subpath-test-configmap-wkjf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040748077s
May 20 00:13:20.269: INFO: Pod "pod-subpath-test-configmap-wkjf": Phase="Running", Reason="", readiness=true. Elapsed: 4.057488013s
May 20 00:13:22.287: INFO: Pod "pod-subpath-test-configmap-wkjf": Phase="Running", Reason="", readiness=true. Elapsed: 6.075927694s
May 20 00:13:24.291: INFO: Pod "pod-subpath-test-configmap-wkjf": Phase="Running", Reason="", readiness=true. Elapsed: 8.080227548s
May 20 00:13:26.295: INFO: Pod "pod-subpath-test-configmap-wkjf": Phase="Running", Reason="", readiness=true. Elapsed: 10.084325522s
May 20 00:13:28.299: INFO: Pod "pod-subpath-test-configmap-wkjf": Phase="Running", Reason="", readiness=true. Elapsed: 12.087506786s
May 20 00:13:30.306: INFO: Pod "pod-subpath-test-configmap-wkjf": Phase="Running", Reason="", readiness=true. Elapsed: 14.094741794s
May 20 00:13:32.310: INFO: Pod "pod-subpath-test-configmap-wkjf": Phase="Running", Reason="", readiness=true. Elapsed: 16.099250933s
May 20 00:13:34.315: INFO: Pod "pod-subpath-test-configmap-wkjf": Phase="Running", Reason="", readiness=true. Elapsed: 18.103552039s
May 20 00:13:36.319: INFO: Pod "pod-subpath-test-configmap-wkjf": Phase="Running", Reason="", readiness=true. Elapsed: 20.107490739s
May 20 00:13:38.347: INFO: Pod "pod-subpath-test-configmap-wkjf": Phase="Running", Reason="", readiness=true. Elapsed: 22.135583518s
May 20 00:13:40.350: INFO: Pod "pod-subpath-test-configmap-wkjf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.139111303s
STEP: Saw pod success
May 20 00:13:40.350: INFO: Pod "pod-subpath-test-configmap-wkjf" satisfied condition "Succeeded or Failed"
May 20 00:13:40.353: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-wkjf container test-container-subpath-configmap-wkjf:
STEP: delete the pod
May 20 00:13:40.381: INFO: Waiting for pod pod-subpath-test-configmap-wkjf to disappear
May 20 00:13:40.400: INFO: Pod pod-subpath-test-configmap-wkjf no longer exists
STEP: Deleting pod pod-subpath-test-configmap-wkjf
May 20 00:13:40.400: INFO: Deleting pod "pod-subpath-test-configmap-wkjf" in namespace "subpath-9280"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:13:40.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9280" for this suite.
• [SLOW TEST:24.309 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":113,"skipped":1728,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:13:40.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:13:45.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2445" for this suite.
• [SLOW TEST:5.171 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":114,"skipped":1790,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:13:45.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-56ea1e55-bfd1-4f8e-b67e-505268e8bea4
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:13:51.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2698" for this suite.
• [SLOW TEST:6.151 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":115,"skipped":1796,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:13:51.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4215 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-4215 I0520 00:13:51.940981 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4215, replica count: 2 I0520 00:13:54.991544 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 00:13:57.991747 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 00:13:57.991: INFO: Creating new exec pod May 20 00:14:03.037: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4215 execpodzjk45 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 20 00:14:05.872: INFO: stderr: "I0520 00:14:05.808689 1477 log.go:172] (0xc000656000) (0xc00070ed20) Create stream\nI0520 00:14:05.808739 1477 log.go:172] (0xc000656000) (0xc00070ed20) Stream added, broadcasting: 1\nI0520 00:14:05.810596 1477 log.go:172] (0xc000656000) Reply frame received for 1\nI0520 00:14:05.810642 1477 log.go:172] (0xc000656000) (0xc0006fa5a0) Create stream\nI0520 00:14:05.810655 1477 log.go:172] (0xc000656000) (0xc0006fa5a0) Stream added, broadcasting: 3\nI0520 00:14:05.811406 1477 log.go:172] (0xc000656000) Reply frame received for 3\nI0520 00:14:05.811437 1477 log.go:172] (0xc000656000) (0xc0006f25a0) Create stream\nI0520 00:14:05.811448 1477 log.go:172] 
(0xc000656000) (0xc0006f25a0) Stream added, broadcasting: 5\nI0520 00:14:05.812224 1477 log.go:172] (0xc000656000) Reply frame received for 5\nI0520 00:14:05.862969 1477 log.go:172] (0xc000656000) Data frame received for 5\nI0520 00:14:05.862998 1477 log.go:172] (0xc0006f25a0) (5) Data frame handling\nI0520 00:14:05.863016 1477 log.go:172] (0xc0006f25a0) (5) Data frame sent\nI0520 00:14:05.863028 1477 log.go:172] (0xc000656000) Data frame received for 5\nI0520 00:14:05.863035 1477 log.go:172] (0xc0006f25a0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0520 00:14:05.863055 1477 log.go:172] (0xc0006f25a0) (5) Data frame sent\nI0520 00:14:05.863432 1477 log.go:172] (0xc000656000) Data frame received for 3\nI0520 00:14:05.863473 1477 log.go:172] (0xc0006fa5a0) (3) Data frame handling\nI0520 00:14:05.863507 1477 log.go:172] (0xc000656000) Data frame received for 5\nI0520 00:14:05.863521 1477 log.go:172] (0xc0006f25a0) (5) Data frame handling\nI0520 00:14:05.865012 1477 log.go:172] (0xc000656000) Data frame received for 1\nI0520 00:14:05.865030 1477 log.go:172] (0xc00070ed20) (1) Data frame handling\nI0520 00:14:05.865038 1477 log.go:172] (0xc00070ed20) (1) Data frame sent\nI0520 00:14:05.865046 1477 log.go:172] (0xc000656000) (0xc00070ed20) Stream removed, broadcasting: 1\nI0520 00:14:05.865054 1477 log.go:172] (0xc000656000) Go away received\nI0520 00:14:05.865588 1477 log.go:172] (0xc000656000) (0xc00070ed20) Stream removed, broadcasting: 1\nI0520 00:14:05.865612 1477 log.go:172] (0xc000656000) (0xc0006fa5a0) Stream removed, broadcasting: 3\nI0520 00:14:05.865621 1477 log.go:172] (0xc000656000) (0xc0006f25a0) Stream removed, broadcasting: 5\n" May 20 00:14:05.872: INFO: stdout: "" May 20 00:14:05.873: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4215 execpodzjk45 -- /bin/sh -x -c nc -zv -t -w 2 
10.99.118.249 80' May 20 00:14:06.117: INFO: stderr: "I0520 00:14:06.047460 1506 log.go:172] (0xc000ac0790) (0xc000bc0280) Create stream\nI0520 00:14:06.047515 1506 log.go:172] (0xc000ac0790) (0xc000bc0280) Stream added, broadcasting: 1\nI0520 00:14:06.049510 1506 log.go:172] (0xc000ac0790) Reply frame received for 1\nI0520 00:14:06.049530 1506 log.go:172] (0xc000ac0790) (0xc000bc03c0) Create stream\nI0520 00:14:06.049535 1506 log.go:172] (0xc000ac0790) (0xc000bc03c0) Stream added, broadcasting: 3\nI0520 00:14:06.050107 1506 log.go:172] (0xc000ac0790) Reply frame received for 3\nI0520 00:14:06.050127 1506 log.go:172] (0xc000ac0790) (0xc000bc0500) Create stream\nI0520 00:14:06.050134 1506 log.go:172] (0xc000ac0790) (0xc000bc0500) Stream added, broadcasting: 5\nI0520 00:14:06.050745 1506 log.go:172] (0xc000ac0790) Reply frame received for 5\nI0520 00:14:06.109917 1506 log.go:172] (0xc000ac0790) Data frame received for 3\nI0520 00:14:06.109974 1506 log.go:172] (0xc000bc03c0) (3) Data frame handling\nI0520 00:14:06.110017 1506 log.go:172] (0xc000ac0790) Data frame received for 5\nI0520 00:14:06.110054 1506 log.go:172] (0xc000bc0500) (5) Data frame handling\nI0520 00:14:06.110077 1506 log.go:172] (0xc000bc0500) (5) Data frame sent\nI0520 00:14:06.110098 1506 log.go:172] (0xc000ac0790) Data frame received for 5\nI0520 00:14:06.110133 1506 log.go:172] (0xc000bc0500) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.118.249 80\nConnection to 10.99.118.249 80 port [tcp/http] succeeded!\nI0520 00:14:06.111520 1506 log.go:172] (0xc000ac0790) Data frame received for 1\nI0520 00:14:06.111551 1506 log.go:172] (0xc000bc0280) (1) Data frame handling\nI0520 00:14:06.111574 1506 log.go:172] (0xc000bc0280) (1) Data frame sent\nI0520 00:14:06.111762 1506 log.go:172] (0xc000ac0790) (0xc000bc0280) Stream removed, broadcasting: 1\nI0520 00:14:06.111959 1506 log.go:172] (0xc000ac0790) Go away received\nI0520 00:14:06.112137 1506 log.go:172] (0xc000ac0790) (0xc000bc0280) Stream removed, 
broadcasting: 1\nI0520 00:14:06.112157 1506 log.go:172] (0xc000ac0790) (0xc000bc03c0) Stream removed, broadcasting: 3\nI0520 00:14:06.112167 1506 log.go:172] (0xc000ac0790) (0xc000bc0500) Stream removed, broadcasting: 5\n" May 20 00:14:06.117: INFO: stdout: "" May 20 00:14:06.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4215 execpodzjk45 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32354' May 20 00:14:06.345: INFO: stderr: "I0520 00:14:06.254508 1527 log.go:172] (0xc000ade000) (0xc0004e0140) Create stream\nI0520 00:14:06.254718 1527 log.go:172] (0xc000ade000) (0xc0004e0140) Stream added, broadcasting: 1\nI0520 00:14:06.258097 1527 log.go:172] (0xc000ade000) Reply frame received for 1\nI0520 00:14:06.258155 1527 log.go:172] (0xc000ade000) (0xc000600fa0) Create stream\nI0520 00:14:06.258180 1527 log.go:172] (0xc000ade000) (0xc000600fa0) Stream added, broadcasting: 3\nI0520 00:14:06.259265 1527 log.go:172] (0xc000ade000) Reply frame received for 3\nI0520 00:14:06.259298 1527 log.go:172] (0xc000ade000) (0xc000601540) Create stream\nI0520 00:14:06.259308 1527 log.go:172] (0xc000ade000) (0xc000601540) Stream added, broadcasting: 5\nI0520 00:14:06.260349 1527 log.go:172] (0xc000ade000) Reply frame received for 5\nI0520 00:14:06.336257 1527 log.go:172] (0xc000ade000) Data frame received for 5\nI0520 00:14:06.336283 1527 log.go:172] (0xc000601540) (5) Data frame handling\nI0520 00:14:06.336293 1527 log.go:172] (0xc000601540) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 32354\nConnection to 172.17.0.13 32354 port [tcp/32354] succeeded!\nI0520 00:14:06.336344 1527 log.go:172] (0xc000ade000) Data frame received for 3\nI0520 00:14:06.336350 1527 log.go:172] (0xc000600fa0) (3) Data frame handling\nI0520 00:14:06.336368 1527 log.go:172] (0xc000ade000) Data frame received for 5\nI0520 00:14:06.336382 1527 log.go:172] (0xc000601540) (5) Data frame handling\nI0520 00:14:06.338010 1527 
log.go:172] (0xc000ade000) Data frame received for 1\nI0520 00:14:06.338031 1527 log.go:172] (0xc0004e0140) (1) Data frame handling\nI0520 00:14:06.338042 1527 log.go:172] (0xc0004e0140) (1) Data frame sent\nI0520 00:14:06.338061 1527 log.go:172] (0xc000ade000) (0xc0004e0140) Stream removed, broadcasting: 1\nI0520 00:14:06.338135 1527 log.go:172] (0xc000ade000) Go away received\nI0520 00:14:06.340527 1527 log.go:172] (0xc000ade000) (0xc0004e0140) Stream removed, broadcasting: 1\nI0520 00:14:06.340769 1527 log.go:172] (0xc000ade000) (0xc000600fa0) Stream removed, broadcasting: 3\nI0520 00:14:06.340787 1527 log.go:172] (0xc000ade000) (0xc000601540) Stream removed, broadcasting: 5\n" May 20 00:14:06.345: INFO: stdout: "" May 20 00:14:06.345: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4215 execpodzjk45 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32354' May 20 00:14:06.551: INFO: stderr: "I0520 00:14:06.472669 1546 log.go:172] (0xc0009331e0) (0xc000a02820) Create stream\nI0520 00:14:06.472716 1546 log.go:172] (0xc0009331e0) (0xc000a02820) Stream added, broadcasting: 1\nI0520 00:14:06.476877 1546 log.go:172] (0xc0009331e0) Reply frame received for 1\nI0520 00:14:06.476917 1546 log.go:172] (0xc0009331e0) (0xc000a02000) Create stream\nI0520 00:14:06.476927 1546 log.go:172] (0xc0009331e0) (0xc000a02000) Stream added, broadcasting: 3\nI0520 00:14:06.478079 1546 log.go:172] (0xc0009331e0) Reply frame received for 3\nI0520 00:14:06.478145 1546 log.go:172] (0xc0009331e0) (0xc0005c8280) Create stream\nI0520 00:14:06.478167 1546 log.go:172] (0xc0009331e0) (0xc0005c8280) Stream added, broadcasting: 5\nI0520 00:14:06.479242 1546 log.go:172] (0xc0009331e0) Reply frame received for 5\nI0520 00:14:06.544250 1546 log.go:172] (0xc0009331e0) Data frame received for 3\nI0520 00:14:06.544290 1546 log.go:172] (0xc000a02000) (3) Data frame handling\nI0520 00:14:06.544322 1546 log.go:172] (0xc0009331e0) 
Data frame received for 5\nI0520 00:14:06.544341 1546 log.go:172] (0xc0005c8280) (5) Data frame handling\nI0520 00:14:06.544359 1546 log.go:172] (0xc0005c8280) (5) Data frame sent\nI0520 00:14:06.544370 1546 log.go:172] (0xc0009331e0) Data frame received for 5\nI0520 00:14:06.544385 1546 log.go:172] (0xc0005c8280) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32354\nConnection to 172.17.0.12 32354 port [tcp/32354] succeeded!\nI0520 00:14:06.545885 1546 log.go:172] (0xc0009331e0) Data frame received for 1\nI0520 00:14:06.545914 1546 log.go:172] (0xc000a02820) (1) Data frame handling\nI0520 00:14:06.545931 1546 log.go:172] (0xc000a02820) (1) Data frame sent\nI0520 00:14:06.545952 1546 log.go:172] (0xc0009331e0) (0xc000a02820) Stream removed, broadcasting: 1\nI0520 00:14:06.545971 1546 log.go:172] (0xc0009331e0) Go away received\nI0520 00:14:06.546303 1546 log.go:172] (0xc0009331e0) (0xc000a02820) Stream removed, broadcasting: 1\nI0520 00:14:06.546320 1546 log.go:172] (0xc0009331e0) (0xc000a02000) Stream removed, broadcasting: 3\nI0520 00:14:06.546332 1546 log.go:172] (0xc0009331e0) (0xc0005c8280) Stream removed, broadcasting: 5\n" May 20 00:14:06.551: INFO: stdout: "" May 20 00:14:06.551: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:14:06.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4215" for this suite. 
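The `nc -zv -t -w 2 <addr> <port>` probes above simply test TCP reachability with a 2-second timeout. A self-contained Python equivalent is sketched below; it probes a throwaway local listener rather than the service's ClusterIP or NodePort, since this sketch runs outside any cluster:

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Rough equivalent of `nc -zv -t -w 2 host port`: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe a local stand-in for the service/NodePort endpoints used in the test.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))    # ephemeral port, standing in for a NodePort
listener.listen(1)
host, port = listener.getsockname()
print(tcp_reachable(host, port))   # True
listener.close()
```

The test runs this check four times: against the service DNS name, the ClusterIP, and the NodePort on each of the two workers, and asserts all four connections succeed.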
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:14.875 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":116,"skipped":1823,"failed":0}
SSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:14:06.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:14:06.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-107" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":117,"skipped":1826,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:14:06.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 20 00:14:07.426: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 20 00:14:09.575: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530447, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530447, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530447, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530447, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 20 00:14:12.610: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 20 00:14:12.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:14:13.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8729" for this suite.
STEP: Destroying namespace "webhook-8729-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.234 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":118,"skipped":1845,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:14:13.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 20 00:14:14.062: INFO: Waiting up to 5m0s for pod "downwardapi-volume-83c14813-044f-43d6-b52c-bd42d8bd2e3a" in
namespace "downward-api-5689" to be "Succeeded or Failed" May 20 00:14:14.081: INFO: Pod "downwardapi-volume-83c14813-044f-43d6-b52c-bd42d8bd2e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.059799ms May 20 00:14:16.231: INFO: Pod "downwardapi-volume-83c14813-044f-43d6-b52c-bd42d8bd2e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16889666s May 20 00:14:18.246: INFO: Pod "downwardapi-volume-83c14813-044f-43d6-b52c-bd42d8bd2e3a": Phase="Running", Reason="", readiness=true. Elapsed: 4.184231552s May 20 00:14:20.251: INFO: Pod "downwardapi-volume-83c14813-044f-43d6-b52c-bd42d8bd2e3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.188940629s STEP: Saw pod success May 20 00:14:20.251: INFO: Pod "downwardapi-volume-83c14813-044f-43d6-b52c-bd42d8bd2e3a" satisfied condition "Succeeded or Failed" May 20 00:14:20.254: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-83c14813-044f-43d6-b52c-bd42d8bd2e3a container client-container: STEP: delete the pod May 20 00:14:20.347: INFO: Waiting for pod downwardapi-volume-83c14813-044f-43d6-b52c-bd42d8bd2e3a to disappear May 20 00:14:20.350: INFO: Pod downwardapi-volume-83c14813-044f-43d6-b52c-bd42d8bd2e3a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:14:20.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5689" for this suite. 
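The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Elapsed: ...` lines above follow a poll-until-terminal-phase pattern. A minimal Python re-creation of that loop is sketched below; the injected `get_phase`, `clock`, and `sleep` parameters are assumptions made so the loop can run without a cluster, and this is not the framework's actual Go helper.

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches Succeeded or Failed, or time out.

    Mirrors the log's pattern: each poll records the current phase and the
    elapsed time; Pending and Running phases are tolerated until the deadline.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase within %.0fs" % timeout)
```

In the log above the pod passes through Pending and Running before the loop observes Succeeded after roughly six seconds, well inside the 5m0s budget.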
• [SLOW TEST:6.430 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":119,"skipped":1854,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:14:20.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:14:20.485: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 20 00:14:20.508: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:20.522: INFO: Number of nodes with available pods: 0 May 20 00:14:20.523: INFO: Node latest-worker is running more than one daemon pod May 20 00:14:21.528: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:21.532: INFO: Number of nodes with available pods: 0 May 20 00:14:21.532: INFO: Node latest-worker is running more than one daemon pod May 20 00:14:22.565: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:22.626: INFO: Number of nodes with available pods: 0 May 20 00:14:22.626: INFO: Node latest-worker is running more than one daemon pod May 20 00:14:23.527: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:23.531: INFO: Number of nodes with available pods: 0 May 20 00:14:23.531: INFO: Node latest-worker is running more than one daemon pod May 20 00:14:24.528: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:24.551: INFO: Number of nodes with available pods: 2 May 20 00:14:24.551: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 20 00:14:24.636: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 20 00:14:24.636: INFO: Wrong image for pod: daemon-set-ww9vt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:24.687: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:25.691: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:25.692: INFO: Wrong image for pod: daemon-set-ww9vt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:25.697: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:26.691: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:26.691: INFO: Wrong image for pod: daemon-set-ww9vt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:26.695: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:27.693: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:27.693: INFO: Wrong image for pod: daemon-set-ww9vt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 20 00:14:27.699: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:28.692: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:28.692: INFO: Wrong image for pod: daemon-set-ww9vt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:28.692: INFO: Pod daemon-set-ww9vt is not available May 20 00:14:28.697: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:29.692: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:29.692: INFO: Wrong image for pod: daemon-set-ww9vt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:29.692: INFO: Pod daemon-set-ww9vt is not available May 20 00:14:29.697: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:30.692: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:30.692: INFO: Wrong image for pod: daemon-set-ww9vt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 20 00:14:30.692: INFO: Pod daemon-set-ww9vt is not available May 20 00:14:30.696: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:31.691: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:31.691: INFO: Wrong image for pod: daemon-set-ww9vt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:31.691: INFO: Pod daemon-set-ww9vt is not available May 20 00:14:31.696: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:32.691: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:32.691: INFO: Wrong image for pod: daemon-set-ww9vt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:32.691: INFO: Pod daemon-set-ww9vt is not available May 20 00:14:32.695: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:33.692: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:33.692: INFO: Wrong image for pod: daemon-set-ww9vt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 20 00:14:33.692: INFO: Pod daemon-set-ww9vt is not available May 20 00:14:33.697: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:34.691: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:34.691: INFO: Wrong image for pod: daemon-set-ww9vt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:34.691: INFO: Pod daemon-set-ww9vt is not available May 20 00:14:34.696: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:35.692: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:35.692: INFO: Pod daemon-set-sw27c is not available May 20 00:14:35.696: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:36.691: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:36.691: INFO: Pod daemon-set-sw27c is not available May 20 00:14:36.695: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:37.761: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 20 00:14:37.765: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:38.692: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:38.697: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:39.692: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:39.692: INFO: Pod daemon-set-qcdtf is not available May 20 00:14:39.697: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:40.692: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:40.692: INFO: Pod daemon-set-qcdtf is not available May 20 00:14:40.698: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:41.692: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:41.692: INFO: Pod daemon-set-qcdtf is not available May 20 00:14:41.697: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:42.691: INFO: Wrong image for pod: daemon-set-qcdtf. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:42.691: INFO: Pod daemon-set-qcdtf is not available May 20 00:14:42.696: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:43.698: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:43.698: INFO: Pod daemon-set-qcdtf is not available May 20 00:14:43.702: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:44.692: INFO: Wrong image for pod: daemon-set-qcdtf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 00:14:44.692: INFO: Pod daemon-set-qcdtf is not available May 20 00:14:44.697: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:45.690: INFO: Pod daemon-set-5fr6p is not available May 20 00:14:45.694: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
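The long run of `Wrong image for pod` lines above comes from a per-pod image comparison repeated on every poll of the RollingUpdate. A sketch of that check is below; representing pods as plain dicts is an assumption for illustration (the real framework inspects `v1.Pod` container specs).

```python
# Expected image taken verbatim from the log above; the dict-shaped pod
# records are an assumed simplification of the real v1.Pod objects.
EXPECTED = "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13"

def pods_with_wrong_image(pods, expected=EXPECTED):
    """Return names of daemon pods whose container image does not match expected."""
    return [p["name"] for p in pods if p["image"] != expected]
```

The rollout is considered complete once this list is empty and every node again reports an available pod, which is exactly the transition the log shows when daemon-set-5fr6p replaces daemon-set-qcdtf.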
May 20 00:14:45.721: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:45.733: INFO: Number of nodes with available pods: 1 May 20 00:14:45.733: INFO: Node latest-worker2 is running more than one daemon pod May 20 00:14:46.744: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:46.748: INFO: Number of nodes with available pods: 1 May 20 00:14:46.748: INFO: Node latest-worker2 is running more than one daemon pod May 20 00:14:47.764: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:47.822: INFO: Number of nodes with available pods: 1 May 20 00:14:47.822: INFO: Node latest-worker2 is running more than one daemon pod May 20 00:14:48.739: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:14:48.744: INFO: Number of nodes with available pods: 2 May 20 00:14:48.744: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7016, will wait for the garbage collector to delete the pods May 20 00:14:48.819: INFO: Deleting DaemonSet.extensions daemon-set took: 6.406173ms May 20 00:14:49.119: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.260679ms May 20 00:14:55.323: INFO: Number of nodes with available pods: 0 May 20 00:14:55.323: INFO: Number of running nodes: 0, number of 
available pods: 0 May 20 00:14:55.325: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7016/daemonsets","resourceVersion":"6084668"},"items":null} May 20 00:14:55.327: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7016/pods","resourceVersion":"6084668"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:14:55.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7016" for this suite. • [SLOW TEST:34.988 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":120,"skipped":1913,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:14:55.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: 
Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-cnw6 STEP: Creating a pod to test atomic-volume-subpath May 20 00:14:55.494: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-cnw6" in namespace "subpath-551" to be "Succeeded or Failed" May 20 00:14:55.501: INFO: Pod "pod-subpath-test-projected-cnw6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.88219ms May 20 00:14:57.506: INFO: Pod "pod-subpath-test-projected-cnw6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011749323s May 20 00:14:59.514: INFO: Pod "pod-subpath-test-projected-cnw6": Phase="Running", Reason="", readiness=true. Elapsed: 4.019781797s May 20 00:15:01.518: INFO: Pod "pod-subpath-test-projected-cnw6": Phase="Running", Reason="", readiness=true. Elapsed: 6.024186136s May 20 00:15:03.523: INFO: Pod "pod-subpath-test-projected-cnw6": Phase="Running", Reason="", readiness=true. Elapsed: 8.028955288s May 20 00:15:05.527: INFO: Pod "pod-subpath-test-projected-cnw6": Phase="Running", Reason="", readiness=true. Elapsed: 10.033012682s May 20 00:15:07.531: INFO: Pod "pod-subpath-test-projected-cnw6": Phase="Running", Reason="", readiness=true. Elapsed: 12.036602322s May 20 00:15:09.535: INFO: Pod "pod-subpath-test-projected-cnw6": Phase="Running", Reason="", readiness=true. Elapsed: 14.040767006s May 20 00:15:11.539: INFO: Pod "pod-subpath-test-projected-cnw6": Phase="Running", Reason="", readiness=true. Elapsed: 16.045257184s May 20 00:15:13.544: INFO: Pod "pod-subpath-test-projected-cnw6": Phase="Running", Reason="", readiness=true. Elapsed: 18.049980709s May 20 00:15:15.548: INFO: Pod "pod-subpath-test-projected-cnw6": Phase="Running", Reason="", readiness=true. Elapsed: 20.053347541s May 20 00:15:17.552: INFO: Pod "pod-subpath-test-projected-cnw6": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.05762309s May 20 00:15:19.556: INFO: Pod "pod-subpath-test-projected-cnw6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.061680465s STEP: Saw pod success May 20 00:15:19.556: INFO: Pod "pod-subpath-test-projected-cnw6" satisfied condition "Succeeded or Failed" May 20 00:15:19.559: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-cnw6 container test-container-subpath-projected-cnw6: STEP: delete the pod May 20 00:15:19.592: INFO: Waiting for pod pod-subpath-test-projected-cnw6 to disappear May 20 00:15:19.602: INFO: Pod pod-subpath-test-projected-cnw6 no longer exists STEP: Deleting pod pod-subpath-test-projected-cnw6 May 20 00:15:19.602: INFO: Deleting pod "pod-subpath-test-projected-cnw6" in namespace "subpath-551" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:15:19.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-551" for this suite. 
• [SLOW TEST:24.265 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":121,"skipped":1926,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:15:19.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 20 00:15:19.699: INFO: Waiting up to 5m0s for pod "pod-8c71b74b-4179-4d95-b620-78e40e18ba74" in namespace "emptydir-5260" to be "Succeeded or Failed" May 20 00:15:19.776: INFO: Pod "pod-8c71b74b-4179-4d95-b620-78e40e18ba74": Phase="Pending", Reason="", readiness=false. Elapsed: 76.972223ms May 20 00:15:21.780: INFO: Pod "pod-8c71b74b-4179-4d95-b620-78e40e18ba74": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.081016259s May 20 00:15:23.827: INFO: Pod "pod-8c71b74b-4179-4d95-b620-78e40e18ba74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127750779s STEP: Saw pod success May 20 00:15:23.827: INFO: Pod "pod-8c71b74b-4179-4d95-b620-78e40e18ba74" satisfied condition "Succeeded or Failed" May 20 00:15:23.830: INFO: Trying to get logs from node latest-worker pod pod-8c71b74b-4179-4d95-b620-78e40e18ba74 container test-container: STEP: delete the pod May 20 00:15:23.881: INFO: Waiting for pod pod-8c71b74b-4179-4d95-b620-78e40e18ba74 to disappear May 20 00:15:23.982: INFO: Pod pod-8c71b74b-4179-4d95-b620-78e40e18ba74 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:15:23.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5260" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":122,"skipped":1964,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:15:23.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-91304365-45f5-43d7-8472-5bdb8ad166f5 
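The emptyDir `(root,0666,default)` test above has the pod's container create a file on the default-medium volume and verify its permission bits. The e2e check runs inside the container; the local sketch below only illustrates the 0666 mode check itself, and the forced `chmod` is an assumption made so the process umask cannot mask bits off.

```python
import os
import stat

def create_with_mode(path, mode=0o666):
    """Create an empty file, force its permission bits, and return them.

    os.open's mode argument is filtered through the umask, so chmod is
    applied afterwards to pin the bits, mirroring a 0666 expectation.
    """
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, mode)
    os.close(fd)
    os.chmod(path, mode)
    return stat.S_IMODE(os.stat(path).st_mode)
```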
STEP: Creating secret with name s-test-opt-upd-715dbfc6-fa0d-4b0b-916a-0430c8a26fda STEP: Creating the pod STEP: Deleting secret s-test-opt-del-91304365-45f5-43d7-8472-5bdb8ad166f5 STEP: Updating secret s-test-opt-upd-715dbfc6-fa0d-4b0b-916a-0430c8a26fda STEP: Creating secret with name s-test-opt-create-ba1bcf2c-067a-4eee-aae3-c5e5418596bb STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:15:32.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2342" for this suite. • [SLOW TEST:8.275 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":123,"skipped":1971,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:15:32.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 20 00:15:32.359: INFO: Waiting up to 5m0s for pod "downwardapi-volume-469b4ce0-c768-4828-8ba5-459b5e5b15a2" in namespace "projected-4638" to be "Succeeded or Failed" May 20 00:15:32.374: INFO: Pod "downwardapi-volume-469b4ce0-c768-4828-8ba5-459b5e5b15a2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.412641ms May 20 00:15:34.379: INFO: Pod "downwardapi-volume-469b4ce0-c768-4828-8ba5-459b5e5b15a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019975711s May 20 00:15:36.383: INFO: Pod "downwardapi-volume-469b4ce0-c768-4828-8ba5-459b5e5b15a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024243161s STEP: Saw pod success May 20 00:15:36.383: INFO: Pod "downwardapi-volume-469b4ce0-c768-4828-8ba5-459b5e5b15a2" satisfied condition "Succeeded or Failed" May 20 00:15:36.386: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-469b4ce0-c768-4828-8ba5-459b5e5b15a2 container client-container: STEP: delete the pod May 20 00:15:36.424: INFO: Waiting for pod downwardapi-volume-469b4ce0-c768-4828-8ba5-459b5e5b15a2 to disappear May 20 00:15:36.434: INFO: Pod downwardapi-volume-469b4ce0-c768-4828-8ba5-459b5e5b15a2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:15:36.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4638" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":124,"skipped":1986,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:15:36.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 20 00:15:36.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3843' May 20 00:15:36.842: INFO: stderr: "" May 20 00:15:36.842: INFO: stdout: "pod/pause created\n" May 20 00:15:36.842: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 20 00:15:36.842: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3843" to be "running and ready" May 20 00:15:36.848: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.530542ms May 20 00:15:38.875: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032370087s May 20 00:15:40.879: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.03657821s May 20 00:15:40.879: INFO: Pod "pause" satisfied condition "running and ready" May 20 00:15:40.879: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 20 00:15:40.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3843' May 20 00:15:40.997: INFO: stderr: "" May 20 00:15:40.997: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 20 00:15:40.997: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3843' May 20 00:15:41.097: INFO: stderr: "" May 20 00:15:41.097: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 20 00:15:41.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3843' May 20 00:15:41.201: INFO: stderr: "" May 20 00:15:41.201: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 20 00:15:41.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3843' May 20 00:15:41.318: INFO: stderr: "" May 20 00:15:41.318: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 20 00:15:41.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete 
--grace-period=0 --force -f - --namespace=kubectl-3843' May 20 00:15:41.449: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 00:15:41.449: INFO: stdout: "pod \"pause\" force deleted\n" May 20 00:15:41.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3843' May 20 00:15:41.726: INFO: stderr: "No resources found in kubectl-3843 namespace.\n" May 20 00:15:41.726: INFO: stdout: "" May 20 00:15:41.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3843 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 20 00:15:41.880: INFO: stderr: "" May 20 00:15:41.880: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:15:41.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3843" for this suite. 
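The `kubectl label` calls in the test above use both forms of the label argument: `key=value` adds or updates a label, and a trailing `key-` removes it. A minimal sketch of that argument handling applied to a pod's label map follows; the helper is hypothetical, not kubectl's implementation.

```python
def apply_label_args(labels, args):
    """Apply kubectl-style label arguments to a copy of a label map.

    'key=value' sets the label; a bare trailing '-' deletes it, matching
    the 'testing-label=testing-label-value' and 'testing-label-' calls
    seen in the log.
    """
    out = dict(labels)
    for arg in args:
        if arg.endswith("-") and "=" not in arg:
            out.pop(arg[:-1], None)   # removal form: 'key-'
        else:
            key, _, value = arg.partition("=")
            out[key] = value          # add/update form: 'key=value'
    return out

labeled = apply_label_args({}, ["testing-label=testing-label-value"])
cleared = apply_label_args(labeled, ["testing-label-"])
```

The `-L testing-label` flag on the verification `get pod` calls adds a `TESTING-LABEL` column, which is why the second check shows the column header with an empty value after removal.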
• [SLOW TEST:5.448 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":125,"skipped":1994,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:15:41.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:16:41.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6295" for this suite. 
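The probe test above sleeps for the full 60 seconds to confirm the asymmetry it names: a failing readiness probe keeps the pod out of the Ready condition, but only a failing liveness probe triggers a container restart. A toy model of that distinction, purely illustrative (the kubelet's actual probe machinery tracks thresholds and timing):

```python
def reconcile_probes(readiness_ok, liveness_ok, restarts=0):
    """Toy model: readiness only gates the Ready condition;
    liveness failures are what increment the restart count."""
    ready = readiness_ok
    if not liveness_ok:
        restarts += 1  # kubelet kills and restarts the container
    return ready, restarts

# Failing readiness, passing liveness: never ready, never restarted.
ready, restarts = reconcile_probes(readiness_ok=False, liveness_ok=True)
```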
• [SLOW TEST:60.093 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":126,"skipped":2010,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:16:41.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 20 00:16:42.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3832' May 20 00:16:42.305: INFO: stderr: "" May 20 00:16:42.305: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 20 00:16:42.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3832' May 20 00:16:42.464: INFO: stderr: "" May 20 00:16:42.464: INFO: stdout: "update-demo-nautilus-2648q update-demo-nautilus-jvv6x " May 20 00:16:42.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2648q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3832' May 20 00:16:42.569: INFO: stderr: "" May 20 00:16:42.569: INFO: stdout: "" May 20 00:16:42.569: INFO: update-demo-nautilus-2648q is created but not running May 20 00:16:47.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3832' May 20 00:16:47.672: INFO: stderr: "" May 20 00:16:47.672: INFO: stdout: "update-demo-nautilus-2648q update-demo-nautilus-jvv6x " May 20 00:16:47.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2648q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3832' May 20 00:16:47.769: INFO: stderr: "" May 20 00:16:47.769: INFO: stdout: "true" May 20 00:16:47.769: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2648q -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3832' May 20 00:16:47.865: INFO: stderr: "" May 20 00:16:47.865: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 00:16:47.865: INFO: validating pod update-demo-nautilus-2648q May 20 00:16:47.905: INFO: got data: { "image": "nautilus.jpg" } May 20 00:16:47.905: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 00:16:47.905: INFO: update-demo-nautilus-2648q is verified up and running May 20 00:16:47.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvv6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3832' May 20 00:16:48.013: INFO: stderr: "" May 20 00:16:48.013: INFO: stdout: "true" May 20 00:16:48.013: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvv6x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3832' May 20 00:16:48.115: INFO: stderr: "" May 20 00:16:48.115: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 00:16:48.115: INFO: validating pod update-demo-nautilus-jvv6x May 20 00:16:48.118: INFO: got data: { "image": "nautilus.jpg" } May 20 00:16:48.118: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
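The go-template repeated in these `kubectl get pods` calls prints `true` only when a container status named `update-demo` carries a `state.running` entry; an empty stdout (as in the "created but not running" polls) means the check did not match. The same predicate over a pod object, translated to Python for illustration:

```python
def is_container_running(pod, name="update-demo"):
    """True iff a containerStatus with this name reports state.running.

    Mirrors the template:
    {{if (exists . "status" "containerStatuses")}}{{range ...}}
      {{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}
    {{end}}{{end}}
    """
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False

# No containerStatuses yet: the template prints nothing (empty stdout).
pending = {"status": {}}
# Running container: the template prints "true".
running = {"status": {"containerStatuses": [
    {"name": "update-demo",
     "state": {"running": {"startedAt": "2020-05-20T00:16:45Z"}}}]}}
```

The companion template over `spec.containers` then extracts `.image`, which is how the log confirms each replica runs `gcr.io/kubernetes-e2e-test-images/nautilus:1.0`.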
May 20 00:16:48.118: INFO: update-demo-nautilus-jvv6x is verified up and running STEP: scaling down the replication controller May 20 00:16:48.201: INFO: scanned /root for discovery docs: May 20 00:16:48.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3832' May 20 00:16:49.425: INFO: stderr: "" May 20 00:16:49.425: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 20 00:16:49.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3832' May 20 00:16:49.538: INFO: stderr: "" May 20 00:16:49.538: INFO: stdout: "update-demo-nautilus-2648q update-demo-nautilus-jvv6x " STEP: Replicas for name=update-demo: expected=1 actual=2 May 20 00:16:54.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3832' May 20 00:16:54.654: INFO: stderr: "" May 20 00:16:54.654: INFO: stdout: "update-demo-nautilus-2648q update-demo-nautilus-jvv6x " STEP: Replicas for name=update-demo: expected=1 actual=2 May 20 00:16:59.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3832' May 20 00:16:59.766: INFO: stderr: "" May 20 00:16:59.766: INFO: stdout: "update-demo-nautilus-jvv6x " May 20 00:16:59.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvv6x -o template 
--template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3832' May 20 00:16:59.862: INFO: stderr: "" May 20 00:16:59.862: INFO: stdout: "true" May 20 00:16:59.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvv6x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3832' May 20 00:16:59.950: INFO: stderr: "" May 20 00:16:59.950: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 00:16:59.950: INFO: validating pod update-demo-nautilus-jvv6x May 20 00:16:59.953: INFO: got data: { "image": "nautilus.jpg" } May 20 00:16:59.953: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 00:16:59.953: INFO: update-demo-nautilus-jvv6x is verified up and running STEP: scaling up the replication controller May 20 00:16:59.955: INFO: scanned /root for discovery docs: May 20 00:16:59.955: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3832' May 20 00:17:01.084: INFO: stderr: "" May 20 00:17:01.084: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 20 00:17:01.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3832' May 20 00:17:01.199: INFO: stderr: "" May 20 00:17:01.200: INFO: stdout: "update-demo-nautilus-9vx5k update-demo-nautilus-jvv6x " May 20 00:17:01.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9vx5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3832' May 20 00:17:01.296: INFO: stderr: "" May 20 00:17:01.296: INFO: stdout: "" May 20 00:17:01.296: INFO: update-demo-nautilus-9vx5k is created but not running May 20 00:17:06.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3832' May 20 00:17:06.411: INFO: stderr: "" May 20 00:17:06.411: INFO: stdout: "update-demo-nautilus-9vx5k update-demo-nautilus-jvv6x " May 20 00:17:06.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9vx5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3832' May 20 00:17:06.509: INFO: stderr: "" May 20 00:17:06.509: INFO: stdout: "true" May 20 00:17:06.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9vx5k -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3832' May 20 00:17:06.601: INFO: stderr: "" May 20 00:17:06.601: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 00:17:06.601: INFO: validating pod update-demo-nautilus-9vx5k May 20 00:17:06.614: INFO: got data: { "image": "nautilus.jpg" } May 20 00:17:06.614: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 00:17:06.614: INFO: update-demo-nautilus-9vx5k is verified up and running May 20 00:17:06.614: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvv6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3832' May 20 00:17:06.725: INFO: stderr: "" May 20 00:17:06.725: INFO: stdout: "true" May 20 00:17:06.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvv6x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3832' May 20 00:17:06.840: INFO: stderr: "" May 20 00:17:06.840: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 00:17:06.840: INFO: validating pod update-demo-nautilus-jvv6x May 20 00:17:06.843: INFO: got data: { "image": "nautilus.jpg" } May 20 00:17:06.843: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 20 00:17:06.843: INFO: update-demo-nautilus-jvv6x is verified up and running STEP: using delete to clean up resources May 20 00:17:06.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3832' May 20 00:17:06.971: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 00:17:06.971: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 20 00:17:06.971: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3832' May 20 00:17:07.095: INFO: stderr: "No resources found in kubectl-3832 namespace.\n" May 20 00:17:07.095: INFO: stdout: "" May 20 00:17:07.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3832 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 20 00:17:07.205: INFO: stderr: "" May 20 00:17:07.205: INFO: stdout: "update-demo-nautilus-9vx5k\nupdate-demo-nautilus-jvv6x\n" May 20 00:17:07.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3832' May 20 00:17:07.811: INFO: stderr: "No resources found in kubectl-3832 namespace.\n" May 20 00:17:07.811: INFO: stdout: "" May 20 00:17:07.811: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3832 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' 
May 20 00:17:07.902: INFO: stderr: "" May 20 00:17:07.902: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:17:07.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3832" for this suite. • [SLOW TEST:25.926 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":127,"skipped":2013,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:17:07.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 20 00:17:08.198: INFO: PodSpec: initContainers in 
spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:17:15.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5539" for this suite. • [SLOW TEST:7.882 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":128,"skipped":2030,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:17:15.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 20 00:17:15.873: INFO: Waiting up to 5m0s for pod "pod-9f6367b8-685a-477b-9b48-59914137d8ed" in namespace "emptydir-8875" to be "Succeeded or Failed" May 20 00:17:15.887: INFO: Pod "pod-9f6367b8-685a-477b-9b48-59914137d8ed": 
Phase="Pending", Reason="", readiness=false. Elapsed: 13.246576ms May 20 00:17:17.953: INFO: Pod "pod-9f6367b8-685a-477b-9b48-59914137d8ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079750675s May 20 00:17:19.957: INFO: Pod "pod-9f6367b8-685a-477b-9b48-59914137d8ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083934359s STEP: Saw pod success May 20 00:17:19.957: INFO: Pod "pod-9f6367b8-685a-477b-9b48-59914137d8ed" satisfied condition "Succeeded or Failed" May 20 00:17:19.960: INFO: Trying to get logs from node latest-worker2 pod pod-9f6367b8-685a-477b-9b48-59914137d8ed container test-container: STEP: delete the pod May 20 00:17:20.161: INFO: Waiting for pod pod-9f6367b8-685a-477b-9b48-59914137d8ed to disappear May 20 00:17:20.168: INFO: Pod pod-9f6367b8-685a-477b-9b48-59914137d8ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:17:20.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8875" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":129,"skipped":2069,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:17:20.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 20 00:17:20.332: INFO: Waiting up to 5m0s for pod "downwardapi-volume-62661199-e7d4-4605-af7d-4cc7cc579ac1" in namespace "projected-358" to be "Succeeded or Failed" May 20 00:17:20.336: INFO: Pod "downwardapi-volume-62661199-e7d4-4605-af7d-4cc7cc579ac1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.465724ms May 20 00:17:22.341: INFO: Pod "downwardapi-volume-62661199-e7d4-4605-af7d-4cc7cc579ac1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008524651s May 20 00:17:24.345: INFO: Pod "downwardapi-volume-62661199-e7d4-4605-af7d-4cc7cc579ac1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012636977s STEP: Saw pod success May 20 00:17:24.345: INFO: Pod "downwardapi-volume-62661199-e7d4-4605-af7d-4cc7cc579ac1" satisfied condition "Succeeded or Failed" May 20 00:17:24.348: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-62661199-e7d4-4605-af7d-4cc7cc579ac1 container client-container: STEP: delete the pod May 20 00:17:24.368: INFO: Waiting for pod downwardapi-volume-62661199-e7d4-4605-af7d-4cc7cc579ac1 to disappear May 20 00:17:24.372: INFO: Pod downwardapi-volume-62661199-e7d4-4605-af7d-4cc7cc579ac1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:17:24.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-358" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":130,"skipped":2080,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:17:24.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:17:24.433: INFO: Creating ReplicaSet my-hostname-basic-e47ea7f6-943f-46c5-96a4-18c0f1af2a3b May 20 00:17:24.439: INFO: Pod name 
my-hostname-basic-e47ea7f6-943f-46c5-96a4-18c0f1af2a3b: Found 0 pods out of 1 May 20 00:17:29.443: INFO: Pod name my-hostname-basic-e47ea7f6-943f-46c5-96a4-18c0f1af2a3b: Found 1 pods out of 1 May 20 00:17:29.443: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e47ea7f6-943f-46c5-96a4-18c0f1af2a3b" is running May 20 00:17:29.446: INFO: Pod "my-hostname-basic-e47ea7f6-943f-46c5-96a4-18c0f1af2a3b-mdglt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 00:17:24 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 00:17:27 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 00:17:27 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 00:17:24 +0000 UTC Reason: Message:}]) May 20 00:17:29.447: INFO: Trying to dial the pod May 20 00:17:34.458: INFO: Controller my-hostname-basic-e47ea7f6-943f-46c5-96a4-18c0f1af2a3b: Got expected result from replica 1 [my-hostname-basic-e47ea7f6-943f-46c5-96a4-18c0f1af2a3b-mdglt]: "my-hostname-basic-e47ea7f6-943f-46c5-96a4-18c0f1af2a3b-mdglt", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:17:34.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7746" for this suite. 
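The ReplicaSet check above polls until the pods matching the name label reach the desired count ("Found 0 pods out of 1", then "Found 1 pods out of 1") before dialing each replica. A sketch of that loop, assuming a caller-supplied `list_pods()` that returns the matching pod names (both names here are placeholders):

```python
import time

def wait_for_replicas(list_pods, want, timeout_s=300.0, interval_s=5.0):
    """Poll list_pods() until it returns `want` names or the timeout expires,
    logging progress in the 'Found N pods out of M' style of the log above."""
    deadline = time.monotonic() + timeout_s
    while True:
        names = list_pods()
        print(f"Found {len(names)} pods out of {want}")
        if len(names) == want:
            return names
        if time.monotonic() > deadline:
            raise TimeoutError(f"only {len(names)}/{want} pods after {timeout_s}s")
        time.sleep(interval_s)

# Simulated listings: the replica's pod appears on the second poll.
listings = iter([[], ["replica-0"]])
found = wait_for_replicas(lambda: next(listings), want=1, interval_s=0.01)
```

The five-second gap between the two "Found ... pods" log lines matches the framework's poll interval for this check.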
• [SLOW TEST:10.087 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":131,"skipped":2091,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:17:34.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:17:40.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7325" for this suite.
STEP: Destroying namespace "nsdeletetest-5407" for this suite.
May 20 00:17:40.786: INFO: Namespace nsdeletetest-5407 was already deleted
STEP: Destroying namespace "nsdeletetest-8407" for this suite.
• [SLOW TEST:6.323 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":132,"skipped":2115,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:17:40.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-9834aacd-f021-41f8-bfe8-95e24f699f55
STEP: Creating a pod to test consume configMaps
May 20 00:17:40.932: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-64a7db40-1aa3-4ed5-9dc5-f5c1b91a9794" in namespace "projected-6926" to be "Succeeded or Failed"
May 20 00:17:40.936: INFO: Pod "pod-projected-configmaps-64a7db40-1aa3-4ed5-9dc5-f5c1b91a9794": Phase="Pending", Reason="", readiness=false. Elapsed: 3.668689ms
May 20 00:17:42.942: INFO: Pod "pod-projected-configmaps-64a7db40-1aa3-4ed5-9dc5-f5c1b91a9794": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009519311s
May 20 00:17:44.945: INFO: Pod "pod-projected-configmaps-64a7db40-1aa3-4ed5-9dc5-f5c1b91a9794": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012788294s
STEP: Saw pod success
May 20 00:17:44.945: INFO: Pod "pod-projected-configmaps-64a7db40-1aa3-4ed5-9dc5-f5c1b91a9794" satisfied condition "Succeeded or Failed"
May 20 00:17:44.947: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-64a7db40-1aa3-4ed5-9dc5-f5c1b91a9794 container projected-configmap-volume-test:
STEP: delete the pod
May 20 00:17:44.979: INFO: Waiting for pod pod-projected-configmaps-64a7db40-1aa3-4ed5-9dc5-f5c1b91a9794 to disappear
May 20 00:17:44.983: INFO: Pod pod-projected-configmaps-64a7db40-1aa3-4ed5-9dc5-f5c1b91a9794 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:17:44.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6926" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":133,"skipped":2128,"failed":0}
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:17:44.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 20 00:17:45.068: INFO: Waiting up to 5m0s for pod "pod-2a89dd84-b0ba-4d1f-9e59-1b5aed1972c0" in namespace "emptydir-3503" to be "Succeeded or Failed"
May 20 00:17:45.098: INFO: Pod "pod-2a89dd84-b0ba-4d1f-9e59-1b5aed1972c0": Phase="Pending", Reason="", readiness=false. Elapsed: 29.420225ms
May 20 00:17:47.103: INFO: Pod "pod-2a89dd84-b0ba-4d1f-9e59-1b5aed1972c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03434824s
May 20 00:17:49.107: INFO: Pod "pod-2a89dd84-b0ba-4d1f-9e59-1b5aed1972c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038605835s
STEP: Saw pod success
May 20 00:17:49.107: INFO: Pod "pod-2a89dd84-b0ba-4d1f-9e59-1b5aed1972c0" satisfied condition "Succeeded or Failed"
May 20 00:17:49.110: INFO: Trying to get logs from node latest-worker pod pod-2a89dd84-b0ba-4d1f-9e59-1b5aed1972c0 container test-container:
STEP: delete the pod
May 20 00:17:49.239: INFO: Waiting for pod pod-2a89dd84-b0ba-4d1f-9e59-1b5aed1972c0 to disappear
May 20 00:17:49.247: INFO: Pod pod-2a89dd84-b0ba-4d1f-9e59-1b5aed1972c0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:17:49.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3503" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":134,"skipped":2128,"failed":0}
S
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:17:49.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 20 00:17:49.346: INFO: Creating deployment "webserver-deployment"
May 20 00:17:49.350: INFO: Waiting for observed generation 1
May 20 00:17:51.565: INFO: Waiting for all required pods to come up
May 20 00:17:51.571: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 20 00:18:02.294: INFO: Waiting for deployment "webserver-deployment" to complete
May 20 00:18:02.300: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 20 00:18:02.305: INFO: Updating deployment webserver-deployment
May 20 00:18:02.305: INFO: Waiting for observed generation 2
May 20 00:18:04.451: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 20 00:18:04.454: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 20 00:18:04.471: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 20 00:18:04.477: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 20 00:18:04.477: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 20 00:18:04.479: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 20 00:18:04.482: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May 20 00:18:04.482: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May 20 00:18:04.487: INFO: Updating deployment webserver-deployment
May 20 00:18:04.487: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May 20 00:18:04.607: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 20 00:18:04.990: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
May 20 00:18:08.052: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-2215
/apis/apps/v1/namespaces/deployment-2215/deployments/webserver-deployment ae03ac1b-169f-4019-a17d-5590e5705ee1 6085973 3 2020-05-20 00:17:49 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-20 00:18:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-20 00:18:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001fe1c78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-20 00:18:04 +0000 UTC,LastTransitionTime:2020-05-20 00:18:04 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-20 00:18:06 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 20 00:18:08.056: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-2215 /apis/apps/v1/namespaces/deployment-2215/replicasets/webserver-deployment-6676bcd6d4 fe88c299-b1eb-4ba0-b8c0-a22559e90422 6085963 3 2020-05-20 00:18:02 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment ae03ac1b-169f-4019-a17d-5590e5705ee1 0xc002ede107 
0xc002ede108}] [] [{kube-controller-manager Update apps/v1 2020-05-20 00:18:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae03ac1b-169f-4019-a17d-5590e5705ee1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ede188 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 20 00:18:08.056: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 20 00:18:08.056: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-2215 /apis/apps/v1/namespaces/deployment-2215/replicasets/webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 6085958 3 2020-05-20 00:17:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment ae03ac1b-169f-4019-a17d-5590e5705ee1 0xc002ede1e7 0xc002ede1e8}] [] [{kube-controller-manager Update apps/v1 2020-05-20 00:18:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae03ac1b-169f-4019-a17d-5590e5705ee1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ede268 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler 
[] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 20 00:18:08.689: INFO: Pod "webserver-deployment-6676bcd6d4-7vqbt" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7vqbt webserver-deployment-6676bcd6d4- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-6676bcd6d4-7vqbt 95c61253-dc36-4a55-86d0-c9ab3f5371b0 6085995 0 2020-05-20 00:18:05 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 fe88c299-b1eb-4ba0-b8c0-a22559e90422 0xc002d3e5a7 0xc002d3e5a8}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe88c299-b1eb-4ba0-b8c0-a22559e90422\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 00:18:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.690: INFO: Pod "webserver-deployment-6676bcd6d4-8d2bb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-8d2bb webserver-deployment-6676bcd6d4- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-6676bcd6d4-8d2bb 2be612ab-8432-4fd1-bf00-df4a0de05c1e 6085987 0 2020-05-20 00:18:04 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 fe88c299-b1eb-4ba0-b8c0-a22559e90422 0xc002d3e757 0xc002d3e758}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe88c299-b1eb-4ba0-b8c0-a22559e90422\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 00:18:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.690: INFO: Pod "webserver-deployment-6676bcd6d4-cgrls" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-cgrls webserver-deployment-6676bcd6d4- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-6676bcd6d4-cgrls 13d6d852-40b2-43e6-b894-555dc0f468a4 6085869 0 2020-05-20 00:18:02 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 fe88c299-b1eb-4ba0-b8c0-a22559e90422 0xc002d3e907 0xc002d3e908}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe88c299-b1eb-4ba0-b8c0-a22559e90422\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 00:18:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.690: INFO: Pod "webserver-deployment-6676bcd6d4-d7jh6" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-d7jh6 webserver-deployment-6676bcd6d4- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-6676bcd6d4-d7jh6 5df29622-6603-450f-8475-810626c2f8ff 6085857 0 2020-05-20 00:18:02 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 fe88c299-b1eb-4ba0-b8c0-a22559e90422 0xc002d3eab7 0xc002d3eab8}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe88c299-b1eb-4ba0-b8c0-a22559e90422\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 00:18:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.690: INFO: Pod "webserver-deployment-6676bcd6d4-dcwqp" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dcwqp webserver-deployment-6676bcd6d4- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-6676bcd6d4-dcwqp 6611027b-0fa6-442c-98ca-1a9e01ce1f7a 6085992 0 2020-05-20 00:18:05 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 fe88c299-b1eb-4ba0-b8c0-a22559e90422 0xc002d3ec67 0xc002d3ec68}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe88c299-b1eb-4ba0-b8c0-a22559e90422\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-20 00:18:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.691: INFO: Pod "webserver-deployment-6676bcd6d4-dvjks" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dvjks webserver-deployment-6676bcd6d4- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-6676bcd6d4-dvjks 29ada547-0599-4ff5-9001-9f0a9e0eae17 6085881 0 2020-05-20 00:18:02 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 fe88c299-b1eb-4ba0-b8c0-a22559e90422 0xc002d3ee37 0xc002d3ee38}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe88c299-b1eb-4ba0-b8c0-a22559e90422\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 00:18:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.691: INFO: Pod "webserver-deployment-6676bcd6d4-h4mhz" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-h4mhz webserver-deployment-6676bcd6d4- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-6676bcd6d4-h4mhz d5bdc0e9-ba0b-47b5-af02-a9364985329b 6085877 0 2020-05-20 00:18:02 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 fe88c299-b1eb-4ba0-b8c0-a22559e90422 0xc002d3efe7 0xc002d3efe8}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe88c299-b1eb-4ba0-b8c0-a22559e90422\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-20 00:18:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.691: INFO: Pod "webserver-deployment-6676bcd6d4-hs4rj" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hs4rj webserver-deployment-6676bcd6d4- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-6676bcd6d4-hs4rj f66b549c-4c65-405e-b21b-c37e988bd057 6085939 0 2020-05-20 00:18:05 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 fe88c299-b1eb-4ba0-b8c0-a22559e90422 0xc002d3f2b7 0xc002d3f2b8}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe88c299-b1eb-4ba0-b8c0-a22559e90422\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.692: INFO: Pod "webserver-deployment-6676bcd6d4-l745n" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-l745n webserver-deployment-6676bcd6d4- deployment-2215 
/api/v1/namespaces/deployment-2215/pods/webserver-deployment-6676bcd6d4-l745n 338639c9-70f3-4e61-8de2-e623b945a1ec 6085962 0 2020-05-20 00:18:04 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 fe88c299-b1eb-4ba0-b8c0-a22559e90422 0xc002d3f487 0xc002d3f488}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe88c299-b1eb-4ba0-b8c0-a22559e90422\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:04 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-20 00:18:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.692: INFO: Pod "webserver-deployment-6676bcd6d4-l7dr8" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-l7dr8 webserver-deployment-6676bcd6d4- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-6676bcd6d4-l7dr8 c8dfb469-12ff-471f-bcf8-73a853dba7c5 6085856 0 2020-05-20 00:18:02 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 fe88c299-b1eb-4ba0-b8c0-a22559e90422 0xc002d3f657 0xc002d3f658}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe88c299-b1eb-4ba0-b8c0-a22559e90422\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-20 00:18:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.692: INFO: Pod "webserver-deployment-6676bcd6d4-lh8q7" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lh8q7 webserver-deployment-6676bcd6d4- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-6676bcd6d4-lh8q7 8158c8f5-a058-469c-a7cc-9a1cee9e6f41 6085959 0 2020-05-20 00:18:05 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 fe88c299-b1eb-4ba0-b8c0-a22559e90422 0xc002d3f817 0xc002d3f818}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe88c299-b1eb-4ba0-b8c0-a22559e90422\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.692: INFO: Pod "webserver-deployment-6676bcd6d4-mvzsv" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mvzsv webserver-deployment-6676bcd6d4- deployment-2215 
/api/v1/namespaces/deployment-2215/pods/webserver-deployment-6676bcd6d4-mvzsv 830d8f81-89b9-4189-a204-d6a97cb0cfc0 6085968 0 2020-05-20 00:18:04 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 fe88c299-b1eb-4ba0-b8c0-a22559e90422 0xc002d3f957 0xc002d3f958}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe88c299-b1eb-4ba0-b8c0-a22559e90422\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-20 00:18:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.692: INFO: Pod "webserver-deployment-6676bcd6d4-wj8t4" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-wj8t4 webserver-deployment-6676bcd6d4- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-6676bcd6d4-wj8t4 7414fc85-bb27-4cf2-ab05-7ddc03fd6809 6086000 0 2020-05-20 00:18:05 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 fe88c299-b1eb-4ba0-b8c0-a22559e90422 0xc002d3fb07 0xc002d3fb08}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe88c299-b1eb-4ba0-b8c0-a22559e90422\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 00:18:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.693: INFO: Pod "webserver-deployment-84855cf797-5fljd" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5fljd webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-5fljd 7fbce730-8308-4f08-ab85-736d15d492e3 6085803 0 2020-05-20 00:17:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc002d3fcb7 0xc002d3fcb8}] [] [{kube-controller-manager Update v1 2020-05-20 00:17:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.151\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.151,StartTime:2020-05-20 00:17:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 00:17:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7401b6137cd7ffcbc666f169d143c558d8e2cc9956df53d662d834460df7a450,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.151,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.693: INFO: Pod "webserver-deployment-84855cf797-7lvsb" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7lvsb webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-7lvsb 0fe4ddac-2721-4d1e-8e57-bfaf73879083 6085989 0 2020-05-20 00:18:04 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc002d3fe67 0xc002d3fe68}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:06 
+0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil
,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-20 00:18:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.693: INFO: Pod "webserver-deployment-84855cf797-89zkk" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-89zkk webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-89zkk 9a5f20b6-456f-4395-91db-0f036191caaf 6085978 0 2020-05-20 00:18:04 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc002d3fff7 0xc002d3fff8}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-20 00:18:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.694: INFO: Pod "webserver-deployment-84855cf797-8qggn" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-8qggn webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-8qggn cb37ffdf-3b0e-40d9-8fde-cec564c4e55d 6085941 0 2020-05-20 00:18:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5a187 0xc003f5a188}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.694: INFO: Pod "webserver-deployment-84855cf797-bvfbn" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bvfbn webserver-deployment-84855cf797- deployment-2215 
/api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-bvfbn 31f86806-d69e-4ef3-98c9-a161d60dac6f 6085942 0 2020-05-20 00:18:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5a2b7 0xc003f5a2b8}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/service
account,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.694: INFO: Pod "webserver-deployment-84855cf797-c5km5" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-c5km5 webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-c5km5 47a0ced3-d3dd-4759-b299-e95589016559 6085977 0 2020-05-20 00:18:04 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5a3e7 0xc003f5a3e8}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:04 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 00:18:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.694: INFO: Pod "webserver-deployment-84855cf797-dvktl" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dvktl webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-dvktl 26f02151-5b6c-431d-ba4c-f0681b757997 6085953 0 2020-05-20 00:18:04 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5a577 0xc003f5a578}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:04 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 00:18:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.694: INFO: Pod "webserver-deployment-84855cf797-g2624" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-g2624 webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-g2624 45ad2804-170e-44a5-86b2-d62e49cb7dc8 6085967 0 2020-05-20 00:18:04 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5a707 0xc003f5a708}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:04 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 00:18:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.695: INFO: Pod "webserver-deployment-84855cf797-hdv9x" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hdv9x webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-hdv9x 9840bd56-7ef5-4b92-b9e2-32d232686a5f 6085815 0 2020-05-20 00:17:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5a8a7 0xc003f5a8a8}] [] [{kube-controller-manager Update v1 2020-05-20 00:17:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.147\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.147,StartTime:2020-05-20 00:17:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 00:18:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5553d8d27fd32bbce3f59d6a6bf3272eeae8e49ad0c72dcb4cd4a126007459ed,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.147,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.695: INFO: Pod "webserver-deployment-84855cf797-hq4nn" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hq4nn webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-hq4nn b5fc1d2c-841b-4c80-b9c4-5691de2b9fc4 6085812 0 2020-05-20 00:17:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5aa57 0xc003f5aa58}] [] [{kube-controller-manager Update v1 2020-05-20 00:17:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:00 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.146\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]
VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.146,StartTime:2020-05-20 00:17:49 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 00:17:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://23394ff509021835653e9150155df423a2558ffeb05a206ee25283aecbdff5b9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.146,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.695: INFO: Pod "webserver-deployment-84855cf797-kr4fc" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-kr4fc webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-kr4fc 224c274a-0cfd-4447-94ed-30831ad45c23 6085769 0 2020-05-20 00:17:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5ac07 0xc003f5ac08}] [] [{kube-controller-manager Update v1 2020-05-20 00:17:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:17:57 
+0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.150\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevi
ces:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.150,StartTime:2020-05-20 
00:17:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 00:17:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d23a8f71aeb772c077462d3a264a8046a7d56208add62734b11ba1e72668b4cd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.150,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.695: INFO: Pod "webserver-deployment-84855cf797-lncbd" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-lncbd webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-lncbd d5713ab2-bc7f-4c03-8a7c-d6466256edd0 6085986 0 2020-05-20 00:18:04 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5adb7 0xc003f5adb8}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 
2020-05-20 00:18:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevic
e{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-20 00:18:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.695: INFO: Pod "webserver-deployment-84855cf797-m9gs2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-m9gs2 webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-m9gs2 5335aa70-6c90-4524-9a0b-650418dc886f 6085944 0 2020-05-20 00:18:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5af57 0xc003f5af58}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.696: INFO: Pod "webserver-deployment-84855cf797-tf8tw" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tf8tw webserver-deployment-84855cf797- deployment-2215 
/api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-tf8tw 258442cc-18c1-4e6b-8dcd-bee3cb0f2010 6085799 0 2020-05-20 00:17:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5b087 0xc003f5b088}] [] [{kube-controller-manager Update v1 2020-05-20 00:17:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.153\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.153,StartTime:2020-05-20 00:17:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 00:18:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2da189c111db96d88fb68042cc9b4ddebeb63481e91b42b921db3820841ef74b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.153,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.696: INFO: Pod "webserver-deployment-84855cf797-vz6n9" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-vz6n9 webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-vz6n9 ec603519-f440-4802-8ed7-0ef4962822b7 6085788 0 2020-05-20 00:17:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5b237 0xc003f5b238}] [] [{kube-controller-manager Update v1 2020-05-20 00:17:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:17:59 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.145\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]
VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.145,StartTime:2020-05-20 00:17:49 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 00:17:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e5905b2477d19ece3f5bbf1830b572fa11db145499243c62ba2f48a246287bd3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.145,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.696: INFO: Pod "webserver-deployment-84855cf797-wlfh4" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wlfh4 webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-wlfh4 181ee470-a6e6-40d1-824b-48d84c3ed29c 6085943 0 2020-05-20 00:18:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5b3e7 0xc003f5b3e8}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.696: INFO: Pod "webserver-deployment-84855cf797-xs7pg" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xs7pg webserver-deployment-84855cf797- deployment-2215 
/api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-xs7pg df3988ab-8d30-4b1d-b408-0419c97ab024 6086001 0 2020-05-20 00:18:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5b517 0xc003f5b518}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-20 00:18:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.696: INFO: Pod "webserver-deployment-84855cf797-xtjhc" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xtjhc webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-xtjhc 1dbca9dc-60c1-4663-bb56-3fc663a2f28c 6085752 0 2020-05-20 00:17:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5b6a7 0xc003f5b6a8}] [] [{kube-controller-manager Update v1 2020-05-20 00:17:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:17:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.143\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.143,StartTime:2020-05-20 00:17:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 00:17:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9b2dfd19c233dad2ba4909f76c7a8b491d27cc9a0b664cc8ca390d8c85cd8f56,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.697: INFO: Pod "webserver-deployment-84855cf797-zjlqg" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-zjlqg webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-zjlqg 49883c72-a52a-4ce4-9ccb-37451506b6f0 6085794 0 2020-05-20 00:17:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5b857 0xc003f5b858}] [] [{kube-controller-manager Update v1 2020-05-20 00:17:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:18:00 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.144\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]
VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:17:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.144,StartTime:2020-05-20 00:17:49 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 00:17:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b9c4abff82757e7cdd59d4fee88970276d17e6a392c4d1896671144ff530c268,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.144,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:18:08.697: INFO: Pod "webserver-deployment-84855cf797-zp8pb" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-zp8pb webserver-deployment-84855cf797- deployment-2215 /api/v1/namespaces/deployment-2215/pods/webserver-deployment-84855cf797-zp8pb 8838235e-d914-419b-b908-d4db59e27087 6085990 0 2020-05-20 00:18:04 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 72cfd67c-8d18-49db-84fe-654d71df7cbb 0xc003f5ba27 0xc003f5ba28}] [] [{kube-controller-manager Update v1 2020-05-20 00:18:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72cfd67c-8d18-49db-84fe-654d71df7cbb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 
00:18:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xblz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xblz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xblz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},Startup
Probe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:18:05 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 00:18:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:18:08.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2215" for this suite. • [SLOW TEST:20.338 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":135,"skipped":2129,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:18:09.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:18:26.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8336" for this suite.
• [SLOW TEST:17.684 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod.
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":288,"completed":136,"skipped":2158,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:18:27.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-ea70da0f-584b-4cfd-8e99-600768bf8f7b STEP: Creating a pod to test consume secrets May 20 00:18:28.397: INFO: Waiting up to 5m0s for pod "pod-secrets-2b83505b-042e-453a-952b-8942e91410c7" in namespace "secrets-7996" to be "Succeeded or Failed" May 20 00:18:28.484: INFO: Pod "pod-secrets-2b83505b-042e-453a-952b-8942e91410c7": Phase="Pending", Reason="", readiness=false. Elapsed: 87.046904ms May 20 00:18:30.539: INFO: Pod "pod-secrets-2b83505b-042e-453a-952b-8942e91410c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141531436s May 20 00:18:32.641: INFO: Pod "pod-secrets-2b83505b-042e-453a-952b-8942e91410c7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.243685583s May 20 00:18:34.880: INFO: Pod "pod-secrets-2b83505b-042e-453a-952b-8942e91410c7": Phase="Running", Reason="", readiness=true. Elapsed: 6.4829993s May 20 00:18:36.948: INFO: Pod "pod-secrets-2b83505b-042e-453a-952b-8942e91410c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.55068239s STEP: Saw pod success May 20 00:18:36.948: INFO: Pod "pod-secrets-2b83505b-042e-453a-952b-8942e91410c7" satisfied condition "Succeeded or Failed" May 20 00:18:37.241: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2b83505b-042e-453a-952b-8942e91410c7 container secret-volume-test: STEP: delete the pod May 20 00:18:37.561: INFO: Waiting for pod pod-secrets-2b83505b-042e-453a-952b-8942e91410c7 to disappear May 20 00:18:37.576: INFO: Pod pod-secrets-2b83505b-042e-453a-952b-8942e91410c7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:18:37.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7996" for this suite. 
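[Editor's note] The pod spec dumps earlier in this log show `DefaultMode:*420` on the secret volume, while the test above is named "with defaultMode set". These are the same value: manifests typically write the mode in octal (0644), and the API stores and reports it as a decimal integer (420). A minimal sketch, assuming a manifest shaped like the one this test creates (volume and container names here are illustrative, not the test's actual literals):

```python
def secret_volume_pod(secret_name: str, default_mode: int) -> dict:
    """Build a pod manifest that mounts a secret volume with an explicit defaultMode."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "spec": {
            "volumes": [{
                "name": "secret-volume",
                # The API serializes defaultMode as a decimal integer, which is
                # why 0o644 shows up as DefaultMode:*420 in the spec dumps above.
                "secret": {"secretName": secret_name, "defaultMode": default_mode},
            }],
            "containers": [{
                "name": "secret-volume-test",
                "volumeMounts": [{
                    "name": "secret-volume",
                    "mountPath": "/etc/secret-volume",
                    "readOnly": True,
                }],
            }],
        },
    }

pod = secret_volume_pod("secret-test", 0o644)
mode = pod["spec"]["volumes"][0]["secret"]["defaultMode"]
print(mode, oct(mode))  # 420 0o644
```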
• [SLOW TEST:10.309 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":137,"skipped":2197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:18:37.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 20 00:18:38.054: INFO: Waiting up to 5m0s for pod "pod-488ba78e-3130-42d4-a2ec-9d1c2ebd230b" in namespace "emptydir-2605" to be "Succeeded or Failed" May 20 00:18:38.060: INFO: Pod "pod-488ba78e-3130-42d4-a2ec-9d1c2ebd230b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.440285ms May 20 00:18:40.063: INFO: Pod "pod-488ba78e-3130-42d4-a2ec-9d1c2ebd230b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008803143s May 20 00:18:42.067: INFO: Pod "pod-488ba78e-3130-42d4-a2ec-9d1c2ebd230b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012465972s STEP: Saw pod success May 20 00:18:42.067: INFO: Pod "pod-488ba78e-3130-42d4-a2ec-9d1c2ebd230b" satisfied condition "Succeeded or Failed" May 20 00:18:42.070: INFO: Trying to get logs from node latest-worker2 pod pod-488ba78e-3130-42d4-a2ec-9d1c2ebd230b container test-container: STEP: delete the pod May 20 00:18:42.126: INFO: Waiting for pod pod-488ba78e-3130-42d4-a2ec-9d1c2ebd230b to disappear May 20 00:18:42.137: INFO: Pod pod-488ba78e-3130-42d4-a2ec-9d1c2ebd230b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:18:42.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2605" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":138,"skipped":2238,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:18:42.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-746/configmap-test-b243b21f-9e5d-4ba1-8e16-22b891e71483 STEP: Creating a 
pod to test consume configMaps May 20 00:18:42.620: INFO: Waiting up to 5m0s for pod "pod-configmaps-698defb2-400e-448a-9d3d-be40241b30e9" in namespace "configmap-746" to be "Succeeded or Failed" May 20 00:18:42.678: INFO: Pod "pod-configmaps-698defb2-400e-448a-9d3d-be40241b30e9": Phase="Pending", Reason="", readiness=false. Elapsed: 58.677179ms May 20 00:18:44.683: INFO: Pod "pod-configmaps-698defb2-400e-448a-9d3d-be40241b30e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063215833s May 20 00:18:46.779: INFO: Pod "pod-configmaps-698defb2-400e-448a-9d3d-be40241b30e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.15934951s STEP: Saw pod success May 20 00:18:46.779: INFO: Pod "pod-configmaps-698defb2-400e-448a-9d3d-be40241b30e9" satisfied condition "Succeeded or Failed" May 20 00:18:46.782: INFO: Trying to get logs from node latest-worker pod pod-configmaps-698defb2-400e-448a-9d3d-be40241b30e9 container env-test: STEP: delete the pod May 20 00:18:46.866: INFO: Waiting for pod pod-configmaps-698defb2-400e-448a-9d3d-be40241b30e9 to disappear May 20 00:18:46.911: INFO: Pod pod-configmaps-698defb2-400e-448a-9d3d-be40241b30e9 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:18:46.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-746" for this suite. 
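[Editor's note] The ConfigMap test above creates a configMap and then a pod that consumes it "via the environment". A hedged sketch of that wiring — an EnvVar whose value comes from a ConfigMap key — with the variable and key names invented for illustration (the real ones live in the e2e test source):

```python
def configmap_env(var_name: str, cm_name: str, cm_key: str) -> dict:
    """An EnvVar entry that sources its value from a ConfigMap key via valueFrom."""
    return {
        "name": var_name,
        "valueFrom": {
            # configMapKeyRef points at a key inside a named ConfigMap in the
            # pod's namespace; the kubelet resolves it at container start.
            "configMapKeyRef": {"name": cm_name, "key": cm_key},
        },
    }

# Hypothetical names; the log only shows the ConfigMap object name.
env = configmap_env("CONFIG_DATA_1", "configmap-test-b243b21f-9e5d-4ba1-8e16-22b891e71483", "data-1")
print(env["valueFrom"]["configMapKeyRef"]["key"])  # data-1
```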
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":139,"skipped":2251,"failed":0}
S
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:18:46.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
May 20 00:18:47.019: INFO: Pod name pod-release: Found 0 pods out of 1
May 20 00:18:52.044: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:18:53.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1174" for this suite.
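[Editor's note] "Released" in the ReplicationController test above means the controller orphans a pod once its labels stop matching the RC's equality-based selector. A minimal sketch of that matching rule, using the `pod-release` name from the log (the changed label value is hypothetical):

```python
def matches(selector: dict, labels: dict) -> bool:
    """Equality-based selector match: every selector key must be present with the same value."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-release"}
pod_labels = {"name": "pod-release"}
owned_before = matches(selector, pod_labels)   # True: the RC owns this pod

pod_labels["name"] = "pod-release-changed"     # the test's label change (value illustrative)
owned_after = matches(selector, pod_labels)    # False: the RC releases (orphans) the pod

print(owned_before, owned_after)  # True False
```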
• [SLOW TEST:6.149 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":140,"skipped":2252,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:18:53.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 20 00:18:53.159: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:18:54.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3584" for this suite.
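[Editor's note] The CRD test above exercises defaulting "for requests and from storage": the apiserver fills absent fields from `default` values in the CRD's structural OpenAPI schema, both when objects are written and when they are read back from etcd. A deliberately simplified sketch of that fill-in step (real defaulting also handles arrays, pruning, and more):

```python
def apply_defaults(schema: dict, obj: dict) -> dict:
    """Recursively fill absent object fields with the schema's `default` values (simplified)."""
    for name, prop in schema.get("properties", {}).items():
        if name not in obj and "default" in prop:
            obj[name] = prop["default"]
        elif isinstance(obj.get(name), dict) and prop.get("type") == "object":
            apply_defaults(prop, obj[name])
    return obj

# Hypothetical structural schema: spec.replicas defaults to 1.
schema = {
    "type": "object",
    "properties": {
        "spec": {
            "type": "object",
            "properties": {"replicas": {"type": "integer", "default": 1}},
        },
    },
}

obj = apply_defaults(schema, {"spec": {}})
print(obj)  # {'spec': {'replicas': 1}}
```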
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":141,"skipped":2257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:18:54.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:18:54.751: INFO: Waiting up to 5m0s for pod "busybox-user-65534-be3a3506-0f2f-47a0-aaf4-bc88c4c4cd53" in namespace "security-context-test-6154" to be "Succeeded or Failed" May 20 00:18:54.769: INFO: Pod "busybox-user-65534-be3a3506-0f2f-47a0-aaf4-bc88c4c4cd53": Phase="Pending", Reason="", readiness=false. Elapsed: 18.228716ms May 20 00:18:56.773: INFO: Pod "busybox-user-65534-be3a3506-0f2f-47a0-aaf4-bc88c4c4cd53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021691465s May 20 00:18:58.949: INFO: Pod "busybox-user-65534-be3a3506-0f2f-47a0-aaf4-bc88c4c4cd53": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.197789672s May 20 00:18:58.949: INFO: Pod "busybox-user-65534-be3a3506-0f2f-47a0-aaf4-bc88c4c4cd53" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:18:58.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6154" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":142,"skipped":2285,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:18:58.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 20 00:18:59.473: INFO: Waiting up to 5m0s for pod "var-expansion-a9f4eff1-31cf-4a68-823f-6596b6331beb" in namespace "var-expansion-3355" to be "Succeeded or Failed" May 20 00:18:59.522: INFO: Pod "var-expansion-a9f4eff1-31cf-4a68-823f-6596b6331beb": Phase="Pending", Reason="", readiness=false. Elapsed: 49.316225ms May 20 00:19:02.056: INFO: Pod "var-expansion-a9f4eff1-31cf-4a68-823f-6596b6331beb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.583450311s May 20 00:19:04.111: INFO: Pod "var-expansion-a9f4eff1-31cf-4a68-823f-6596b6331beb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.638034952s STEP: Saw pod success May 20 00:19:04.111: INFO: Pod "var-expansion-a9f4eff1-31cf-4a68-823f-6596b6331beb" satisfied condition "Succeeded or Failed" May 20 00:19:04.133: INFO: Trying to get logs from node latest-worker2 pod var-expansion-a9f4eff1-31cf-4a68-823f-6596b6331beb container dapi-container: STEP: delete the pod May 20 00:19:04.681: INFO: Waiting for pod var-expansion-a9f4eff1-31cf-4a68-823f-6596b6331beb to disappear May 20 00:19:04.743: INFO: Pod var-expansion-a9f4eff1-31cf-4a68-823f-6596b6331beb no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:19:04.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3355" for this suite. • [SLOW TEST:5.761 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":143,"skipped":2287,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:19:04.757: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8997 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8997 STEP: creating replication controller externalsvc in namespace services-8997 I0520 00:19:05.509719 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8997, replica count: 2 I0520 00:19:08.560055 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 00:19:11.560320 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 20 00:19:11.594: INFO: Creating new exec pod May 20 00:19:15.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8997 execpodmdsbq -- /bin/sh -x -c nslookup clusterip-service' May 20 00:19:15.904: INFO: stderr: "I0520 00:19:15.777095 2283 log.go:172] (0xc000aaf970) (0xc000c14460) Create stream\nI0520 00:19:15.777295 2283 log.go:172] (0xc000aaf970) (0xc000c14460) Stream added, broadcasting: 1\nI0520 00:19:15.779945 2283 log.go:172] (0xc000aaf970) Reply frame received for 1\nI0520 00:19:15.779972 2283 log.go:172] (0xc000aaf970) 
(0xc000382820) Create stream\nI0520 00:19:15.779983 2283 log.go:172] (0xc000aaf970) (0xc000382820) Stream added, broadcasting: 3\nI0520 00:19:15.780900 2283 log.go:172] (0xc000aaf970) Reply frame received for 3\nI0520 00:19:15.780935 2283 log.go:172] (0xc000aaf970) (0xc0003834a0) Create stream\nI0520 00:19:15.780942 2283 log.go:172] (0xc000aaf970) (0xc0003834a0) Stream added, broadcasting: 5\nI0520 00:19:15.781942 2283 log.go:172] (0xc000aaf970) Reply frame received for 5\nI0520 00:19:15.855312 2283 log.go:172] (0xc000aaf970) Data frame received for 5\nI0520 00:19:15.855342 2283 log.go:172] (0xc0003834a0) (5) Data frame handling\nI0520 00:19:15.855360 2283 log.go:172] (0xc0003834a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0520 00:19:15.894239 2283 log.go:172] (0xc000aaf970) Data frame received for 3\nI0520 00:19:15.894261 2283 log.go:172] (0xc000382820) (3) Data frame handling\nI0520 00:19:15.894273 2283 log.go:172] (0xc000382820) (3) Data frame sent\nI0520 00:19:15.895903 2283 log.go:172] (0xc000aaf970) Data frame received for 3\nI0520 00:19:15.895927 2283 log.go:172] (0xc000382820) (3) Data frame handling\nI0520 00:19:15.895945 2283 log.go:172] (0xc000382820) (3) Data frame sent\nI0520 00:19:15.896598 2283 log.go:172] (0xc000aaf970) Data frame received for 5\nI0520 00:19:15.896614 2283 log.go:172] (0xc0003834a0) (5) Data frame handling\nI0520 00:19:15.896656 2283 log.go:172] (0xc000aaf970) Data frame received for 3\nI0520 00:19:15.896714 2283 log.go:172] (0xc000382820) (3) Data frame handling\nI0520 00:19:15.899046 2283 log.go:172] (0xc000aaf970) Data frame received for 1\nI0520 00:19:15.899063 2283 log.go:172] (0xc000c14460) (1) Data frame handling\nI0520 00:19:15.899081 2283 log.go:172] (0xc000c14460) (1) Data frame sent\nI0520 00:19:15.899095 2283 log.go:172] (0xc000aaf970) (0xc000c14460) Stream removed, broadcasting: 1\nI0520 00:19:15.899138 2283 log.go:172] (0xc000aaf970) Go away received\nI0520 00:19:15.899412 2283 log.go:172] (0xc000aaf970) 
(0xc000c14460) Stream removed, broadcasting: 1\nI0520 00:19:15.899426 2283 log.go:172] (0xc000aaf970) (0xc000382820) Stream removed, broadcasting: 3\nI0520 00:19:15.899433 2283 log.go:172] (0xc000aaf970) (0xc0003834a0) Stream removed, broadcasting: 5\n" May 20 00:19:15.904: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8997.svc.cluster.local\tcanonical name = externalsvc.services-8997.svc.cluster.local.\nName:\texternalsvc.services-8997.svc.cluster.local\nAddress: 10.96.247.231\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8997, will wait for the garbage collector to delete the pods May 20 00:19:15.965: INFO: Deleting ReplicationController externalsvc took: 7.613121ms May 20 00:19:16.266: INFO: Terminating ReplicationController externalsvc pods took: 300.250706ms May 20 00:19:25.486: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:19:25.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8997" for this suite. 
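The test above flips a Service from type=ClusterIP to type=ExternalName pointing at another in-cluster service, which is why the nslookup stdout shows `clusterip-service.services-8997.svc.cluster.local` resolving as a CNAME to `externalsvc.services-8997.svc.cluster.local`. A minimal sketch of that manifest mutation over plain dicts (field names follow the Kubernetes Service schema; the helper name and sample clusterIP are illustrative, not from the log):

```python
def to_external_name(svc: dict, target_fqdn: str) -> dict:
    """Return a copy of a Service manifest converted to type=ExternalName."""
    spec = dict(svc["spec"])
    spec["type"] = "ExternalName"
    spec["externalName"] = target_fqdn
    # An ExternalName Service is a pure DNS alias: it carries no
    # cluster IP or ports of its own.
    spec.pop("clusterIP", None)
    spec.pop("ports", None)
    return {**svc, "spec": spec}

clusterip_svc = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "clusterip-service", "namespace": "services-8997"},
    "spec": {"type": "ClusterIP", "clusterIP": "10.96.0.123",
             "ports": [{"port": 80}]},
}

converted = to_external_name(
    clusterip_svc, "externalsvc.services-8997.svc.cluster.local")
print(converted["spec"])
```

After the change, cluster DNS answers queries for the old name with a CNAME to the external name, exactly as the captured nslookup output shows.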
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.816 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":144,"skipped":2315,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:19:25.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:19:25.708: INFO: The status of Pod test-webserver-6684ab95-cb0d-4661-9e91-f5bcdf6a6c7b is Pending, waiting for it to be Running (with Ready = true) May 20 00:19:27.712: INFO: The status of Pod test-webserver-6684ab95-cb0d-4661-9e91-f5bcdf6a6c7b is Pending, waiting for it to be Running (with 
Ready = true) May 20 00:19:29.712: INFO: The status of Pod test-webserver-6684ab95-cb0d-4661-9e91-f5bcdf6a6c7b is Running (Ready = false) May 20 00:19:31.712: INFO: The status of Pod test-webserver-6684ab95-cb0d-4661-9e91-f5bcdf6a6c7b is Running (Ready = false) May 20 00:19:33.712: INFO: The status of Pod test-webserver-6684ab95-cb0d-4661-9e91-f5bcdf6a6c7b is Running (Ready = false) May 20 00:19:35.712: INFO: The status of Pod test-webserver-6684ab95-cb0d-4661-9e91-f5bcdf6a6c7b is Running (Ready = false) May 20 00:19:37.712: INFO: The status of Pod test-webserver-6684ab95-cb0d-4661-9e91-f5bcdf6a6c7b is Running (Ready = false) May 20 00:19:39.713: INFO: The status of Pod test-webserver-6684ab95-cb0d-4661-9e91-f5bcdf6a6c7b is Running (Ready = false) May 20 00:19:41.713: INFO: The status of Pod test-webserver-6684ab95-cb0d-4661-9e91-f5bcdf6a6c7b is Running (Ready = false) May 20 00:19:43.712: INFO: The status of Pod test-webserver-6684ab95-cb0d-4661-9e91-f5bcdf6a6c7b is Running (Ready = false) May 20 00:19:45.713: INFO: The status of Pod test-webserver-6684ab95-cb0d-4661-9e91-f5bcdf6a6c7b is Running (Ready = false) May 20 00:19:47.713: INFO: The status of Pod test-webserver-6684ab95-cb0d-4661-9e91-f5bcdf6a6c7b is Running (Ready = false) May 20 00:19:49.712: INFO: The status of Pod test-webserver-6684ab95-cb0d-4661-9e91-f5bcdf6a6c7b is Running (Ready = true) May 20 00:19:49.714: INFO: Container started at 2020-05-20 00:19:28 +0000 UTC, pod became ready at 2020-05-20 00:19:49 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:19:49.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8352" for this suite. 
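The readiness-probe test above asserts the pod never reports Ready before the probe's initial delay has elapsed. The log prints the two relevant instants (container start vs. pod becoming Ready); the gap between them can be checked directly from those timestamps:

```python
# Timestamps taken verbatim from the log lines above.
from datetime import datetime, timezone

started = datetime(2020, 5, 20, 0, 19, 28, tzinfo=timezone.utc)  # container started
ready = datetime(2020, 5, 20, 0, 19, 49, tzinfo=timezone.utc)    # pod became ready
elapsed = (ready - started).total_seconds()
print(elapsed)  # 21.0 seconds of Running (Ready = false) before Ready = true
```

The 21-second window matches the run of `Running (Ready = false)` status lines polled at 2-second intervals in the log; the probe's configured initialDelaySeconds itself is not printed in this excerpt.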
• [SLOW TEST:24.148 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":145,"skipped":2354,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:19:49.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 20 00:19:49.763: INFO: >>> kubeConfig: /root/.kube/config May 20 00:19:52.734: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:20:03.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2125" for this 
suite. • [SLOW TEST:13.734 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":146,"skipped":2370,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:20:03.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 20 00:20:11.574: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 00:20:11.584: INFO: Pod pod-with-poststart-exec-hook still exists May 20 00:20:13.584: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 00:20:13.589: INFO: Pod pod-with-poststart-exec-hook still exists May 20 00:20:15.584: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 00:20:15.589: INFO: Pod pod-with-poststart-exec-hook still exists May 20 00:20:17.584: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 00:20:17.588: INFO: Pod pod-with-poststart-exec-hook still exists May 20 00:20:19.584: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 00:20:19.588: INFO: Pod pod-with-poststart-exec-hook still exists May 20 00:20:21.584: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 00:20:21.589: INFO: Pod pod-with-poststart-exec-hook still exists May 20 00:20:23.584: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 00:20:23.589: INFO: Pod pod-with-poststart-exec-hook still exists May 20 00:20:25.584: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 00:20:25.589: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:20:25.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9674" for this suite. 
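The teardown of the lifecycle-hook test above is a simple poll loop: every 2 seconds it re-fetches the pod and logs "still exists" until the lookup comes back empty. A self-contained sketch of that loop, with a stand-in for the real API lookup (the helper name is hypothetical, and the demo uses a short interval so it runs instantly):

```python
import time
from itertools import count

def wait_for_disappear(get_pod, timeout: float = 60.0, interval: float = 0.01) -> bool:
    """Poll until get_pod() returns None (pod gone) or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_pod() is None:
            return True  # pod no longer exists
        time.sleep(interval)
    return False  # timed out while the pod still existed

# Stand-in for an API lookup: the "pod" vanishes on the fourth poll,
# mirroring the repeated "still exists" lines in the log above.
calls = count(1)
fake_get_pod = lambda: None if next(calls) >= 4 else "pod-with-poststart-exec-hook"
print(wait_for_disappear(fake_get_pod))  # True
```

In the real test the interval is 2 seconds, which is why the "Waiting for pod ... to disappear" lines above are spaced exactly 2 seconds apart.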
• [SLOW TEST:22.142 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":147,"skipped":2372,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:20:25.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 20 00:20:29.745: INFO: &Pod{ObjectMeta:{send-events-2fd8b4f4-28a0-40fc-945a-ad6061301ea8 events-808 /api/v1/namespaces/events-808/pods/send-events-2fd8b4f4-28a0-40fc-945a-ad6061301ea8 83e1b5dd-1274-4226-a38d-0ce2d53f3159 6087113 0 2020-05-20 00:20:25 +0000 UTC map[name:foo time:702969698] map[] [] [] [{e2e.test Update v1 
2020-05-20 00:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:20:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.165\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-47qmd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-47qmd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-47qmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:20:25 +0000
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:20:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:20:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:20:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.165,StartTime:2020-05-20 00:20:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 00:20:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://066b54d7bef21802414ae689e5fb560b14fade6162fa4ca32521ec60c0886e29,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.165,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 20 00:20:31.750: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 20 00:20:33.755: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:20:33.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-808" for this suite. 
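The Events test above waits until it has seen both a scheduler event and a kubelet event that refer to the pod ("Saw scheduler event for our pod" / "Saw kubelet event for our pod"). A sketch of that filter over plain-dict events, assuming each event carries `source.component` and `involvedObject.name` as in the core/v1 Event schema (the sample data below is illustrative, not copied from the log):

```python
def events_about(events, pod_name, component):
    """Select events emitted by one component about one pod."""
    return [e for e in events
            if e["involvedObject"]["name"] == pod_name
            and e["source"]["component"] == component]

sample_events = [
    {"involvedObject": {"name": "send-events-2fd8b4f4"},
     "source": {"component": "default-scheduler"}, "reason": "Scheduled"},
    {"involvedObject": {"name": "send-events-2fd8b4f4"},
     "source": {"component": "kubelet"}, "reason": "Started"},
]

scheduler_evts = events_about(sample_events, "send-events-2fd8b4f4", "default-scheduler")
kubelet_evts = events_about(sample_events, "send-events-2fd8b4f4", "kubelet")
print(len(scheduler_evts), len(kubelet_evts))  # 1 1
```

The test passes once both lists are non-empty, i.e. both components have reported on the pod's scheduling and running.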
• [SLOW TEST:8.206 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":148,"skipped":2382,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:20:33.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 00:20:34.621: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 00:20:36.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725530834, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530834, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530834, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530834, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 00:20:38.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530834, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530834, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530834, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530834, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 00:20:41.736: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the 
AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 20 00:20:41.756: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:20:41.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6127" for this suite. STEP: Destroying namespace "webhook-6127-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.087 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":149,"skipped":2387,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:20:41.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource 
definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:20:41.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 20 00:20:42.623: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-20T00:20:42Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-20T00:20:42Z]] name:name1 resourceVersion:6087239 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6294300e-9ae3-477e-9b08-d5028571ce55] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 20 00:20:52.629: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-20T00:20:52Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-20T00:20:52Z]] name:name2 resourceVersion:6087283 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0fe3097f-afc0-420e-a562-66904b6b6717] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 20 00:21:02.637: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-20T00:20:42Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-20T00:21:02Z]] name:name1 resourceVersion:6087313 
selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6294300e-9ae3-477e-9b08-d5028571ce55] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 20 00:21:12.644: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-20T00:20:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-20T00:21:12Z]] name:name2 resourceVersion:6087345 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0fe3097f-afc0-420e-a562-66904b6b6717] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 20 00:21:22.653: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-20T00:20:42Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-20T00:21:02Z]] name:name1 resourceVersion:6087379 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6294300e-9ae3-477e-9b08-d5028571ce55] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 20 00:21:32.662: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-20T00:20:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-20T00:21:12Z]] name:name2 resourceVersion:6087409 
selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0fe3097f-afc0-420e-a562-66904b6b6717] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:21:43.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5244" for this suite. • [SLOW TEST:61.289 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":150,"skipped":2389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:21:43.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should 
support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 20 00:21:43.229: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix132522869/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:21:43.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7841" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":151,"skipped":2428,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:21:43.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 00:21:44.138: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 
00:21:46.349: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530904, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530904, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530904, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725530904, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 00:21:49.378: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:21:49.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9066-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:21:50.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-3416" for this suite. STEP: Destroying namespace "webhook-3416-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.458 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":152,"skipped":2442,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:21:50.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:21:54.879: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6325" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":153,"skipped":2481,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:21:54.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container May 20 00:22:01.003: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3326 PodName:pod-sharedvolume-ae9b4399-22b3-409e-9e69-b93bf3dfedc7 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 00:22:01.003: INFO: >>> kubeConfig: /root/.kube/config I0520 00:22:01.044886 7 log.go:172] (0xc002fe8420) (0xc0026637c0) Create stream I0520 00:22:01.044926 7 log.go:172] (0xc002fe8420) (0xc0026637c0) Stream added, broadcasting: 1 I0520 00:22:01.048979 7 log.go:172] (0xc002fe8420) Reply frame received for 1 I0520 00:22:01.049012 7 log.go:172] (0xc002fe8420) (0xc0019120a0) Create stream I0520 00:22:01.049022 7 log.go:172] (0xc002fe8420) (0xc0019120a0) Stream added, broadcasting: 3 I0520 00:22:01.049977 
7 log.go:172] (0xc002fe8420) Reply frame received for 3 I0520 00:22:01.050000 7 log.go:172] (0xc002fe8420) (0xc0019121e0) Create stream I0520 00:22:01.050009 7 log.go:172] (0xc002fe8420) (0xc0019121e0) Stream added, broadcasting: 5 I0520 00:22:01.050601 7 log.go:172] (0xc002fe8420) Reply frame received for 5 I0520 00:22:01.136992 7 log.go:172] (0xc002fe8420) Data frame received for 5 I0520 00:22:01.137052 7 log.go:172] (0xc0019121e0) (5) Data frame handling I0520 00:22:01.137091 7 log.go:172] (0xc002fe8420) Data frame received for 3 I0520 00:22:01.137312 7 log.go:172] (0xc0019120a0) (3) Data frame handling I0520 00:22:01.137352 7 log.go:172] (0xc0019120a0) (3) Data frame sent I0520 00:22:01.137373 7 log.go:172] (0xc002fe8420) Data frame received for 3 I0520 00:22:01.137392 7 log.go:172] (0xc0019120a0) (3) Data frame handling I0520 00:22:01.138750 7 log.go:172] (0xc002fe8420) Data frame received for 1 I0520 00:22:01.138780 7 log.go:172] (0xc0026637c0) (1) Data frame handling I0520 00:22:01.138810 7 log.go:172] (0xc0026637c0) (1) Data frame sent I0520 00:22:01.138832 7 log.go:172] (0xc002fe8420) (0xc0026637c0) Stream removed, broadcasting: 1 I0520 00:22:01.138959 7 log.go:172] (0xc002fe8420) (0xc0026637c0) Stream removed, broadcasting: 1 I0520 00:22:01.138980 7 log.go:172] (0xc002fe8420) (0xc0019120a0) Stream removed, broadcasting: 3 I0520 00:22:01.138991 7 log.go:172] (0xc002fe8420) (0xc0019121e0) Stream removed, broadcasting: 5 May 20 00:22:01.139: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 I0520 00:22:01.139071 7 log.go:172] (0xc002fe8420) Go away received May 20 00:22:01.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3326" for this suite. 
• [SLOW TEST:6.262 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":154,"skipped":2486,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:22:01.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 20 00:22:01.235: INFO: Waiting up to 5m0s for pod "downwardapi-volume-602ffa9b-b2bb-4006-88cc-f0f4142a94da" in namespace "projected-9650" to be "Succeeded or Failed" May 20 00:22:01.272: INFO: Pod "downwardapi-volume-602ffa9b-b2bb-4006-88cc-f0f4142a94da": Phase="Pending", Reason="", readiness=false. Elapsed: 36.661187ms May 20 00:22:03.276: INFO: Pod "downwardapi-volume-602ffa9b-b2bb-4006-88cc-f0f4142a94da": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.041522613s May 20 00:22:05.282: INFO: Pod "downwardapi-volume-602ffa9b-b2bb-4006-88cc-f0f4142a94da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046619331s STEP: Saw pod success May 20 00:22:05.282: INFO: Pod "downwardapi-volume-602ffa9b-b2bb-4006-88cc-f0f4142a94da" satisfied condition "Succeeded or Failed" May 20 00:22:05.284: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-602ffa9b-b2bb-4006-88cc-f0f4142a94da container client-container: STEP: delete the pod May 20 00:22:05.323: INFO: Waiting for pod downwardapi-volume-602ffa9b-b2bb-4006-88cc-f0f4142a94da to disappear May 20 00:22:05.380: INFO: Pod downwardapi-volume-602ffa9b-b2bb-4006-88cc-f0f4142a94da no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:22:05.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9650" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":155,"skipped":2487,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:22:05.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 20 00:22:05.464: INFO: Waiting up to 5m0s for pod "downward-api-f496bfec-0397-4a9c-b849-0b3be2d4a259" in namespace "downward-api-6454" to be "Succeeded or Failed" May 20 00:22:05.476: INFO: Pod "downward-api-f496bfec-0397-4a9c-b849-0b3be2d4a259": Phase="Pending", Reason="", readiness=false. Elapsed: 11.972411ms May 20 00:22:07.479: INFO: Pod "downward-api-f496bfec-0397-4a9c-b849-0b3be2d4a259": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015454798s May 20 00:22:09.483: INFO: Pod "downward-api-f496bfec-0397-4a9c-b849-0b3be2d4a259": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019627547s STEP: Saw pod success May 20 00:22:09.483: INFO: Pod "downward-api-f496bfec-0397-4a9c-b849-0b3be2d4a259" satisfied condition "Succeeded or Failed" May 20 00:22:09.486: INFO: Trying to get logs from node latest-worker pod downward-api-f496bfec-0397-4a9c-b849-0b3be2d4a259 container dapi-container: STEP: delete the pod May 20 00:22:09.547: INFO: Waiting for pod downward-api-f496bfec-0397-4a9c-b849-0b3be2d4a259 to disappear May 20 00:22:09.595: INFO: Pod downward-api-f496bfec-0397-4a9c-b849-0b3be2d4a259 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:22:09.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6454" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":156,"skipped":2511,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:22:09.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 
00:22:13.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7766" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":157,"skipped":2519,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:22:13.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:24:14.433: INFO: Deleting pod "var-expansion-40adaaab-b82d-4124-8562-f71f6fb40e7b" in namespace "var-expansion-8311" May 20 00:24:14.438: INFO: Wait up to 5m0s for pod "var-expansion-40adaaab-b82d-4124-8562-f71f6fb40e7b" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:24:16.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8311" for this suite. 
• [SLOW TEST:122.588 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":158,"skipped":2538,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:24:16.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9353 STEP: creating service affinity-clusterip-transition in namespace services-9353 STEP: creating replication controller affinity-clusterip-transition in namespace services-9353 I0520 00:24:16.603770 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-9353, replica count: 3 I0520 00:24:19.654195 7 runners.go:190] 
affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 00:24:22.654461 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 00:24:22.661: INFO: Creating new exec pod May 20 00:24:27.677: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9353 execpod-affinity8h5p5 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 20 00:24:30.788: INFO: stderr: "I0520 00:24:30.671903 2322 log.go:172] (0xc0009ec000) (0xc0006a6c80) Create stream\nI0520 00:24:30.671963 2322 log.go:172] (0xc0009ec000) (0xc0006a6c80) Stream added, broadcasting: 1\nI0520 00:24:30.675350 2322 log.go:172] (0xc0009ec000) Reply frame received for 1\nI0520 00:24:30.675412 2322 log.go:172] (0xc0009ec000) (0xc000682500) Create stream\nI0520 00:24:30.675449 2322 log.go:172] (0xc0009ec000) (0xc000682500) Stream added, broadcasting: 3\nI0520 00:24:30.676571 2322 log.go:172] (0xc0009ec000) Reply frame received for 3\nI0520 00:24:30.676608 2322 log.go:172] (0xc0009ec000) (0xc0006cd4a0) Create stream\nI0520 00:24:30.676626 2322 log.go:172] (0xc0009ec000) (0xc0006cd4a0) Stream added, broadcasting: 5\nI0520 00:24:30.677875 2322 log.go:172] (0xc0009ec000) Reply frame received for 5\nI0520 00:24:30.768261 2322 log.go:172] (0xc0009ec000) Data frame received for 5\nI0520 00:24:30.768295 2322 log.go:172] (0xc0006cd4a0) (5) Data frame handling\nI0520 00:24:30.768310 2322 log.go:172] (0xc0006cd4a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0520 00:24:30.780346 2322 log.go:172] (0xc0009ec000) Data frame received for 5\nI0520 00:24:30.780376 2322 log.go:172] (0xc0006cd4a0) (5) Data frame handling\nI0520 00:24:30.780397 2322 log.go:172] (0xc0006cd4a0) (5) Data frame sent\nConnection to 
affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0520 00:24:30.780455 2322 log.go:172] (0xc0009ec000) Data frame received for 5\nI0520 00:24:30.780476 2322 log.go:172] (0xc0006cd4a0) (5) Data frame handling\nI0520 00:24:30.780861 2322 log.go:172] (0xc0009ec000) Data frame received for 3\nI0520 00:24:30.780887 2322 log.go:172] (0xc000682500) (3) Data frame handling\nI0520 00:24:30.782365 2322 log.go:172] (0xc0009ec000) Data frame received for 1\nI0520 00:24:30.782395 2322 log.go:172] (0xc0006a6c80) (1) Data frame handling\nI0520 00:24:30.782418 2322 log.go:172] (0xc0006a6c80) (1) Data frame sent\nI0520 00:24:30.782434 2322 log.go:172] (0xc0009ec000) (0xc0006a6c80) Stream removed, broadcasting: 1\nI0520 00:24:30.782456 2322 log.go:172] (0xc0009ec000) Go away received\nI0520 00:24:30.782914 2322 log.go:172] (0xc0009ec000) (0xc0006a6c80) Stream removed, broadcasting: 1\nI0520 00:24:30.782937 2322 log.go:172] (0xc0009ec000) (0xc000682500) Stream removed, broadcasting: 3\nI0520 00:24:30.782949 2322 log.go:172] (0xc0009ec000) (0xc0006cd4a0) Stream removed, broadcasting: 5\n" May 20 00:24:30.788: INFO: stdout: "" May 20 00:24:30.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9353 execpod-affinity8h5p5 -- /bin/sh -x -c nc -zv -t -w 2 10.100.240.43 80' May 20 00:24:31.001: INFO: stderr: "I0520 00:24:30.942260 2352 log.go:172] (0xc00095d1e0) (0xc000722f00) Create stream\nI0520 00:24:30.942335 2352 log.go:172] (0xc00095d1e0) (0xc000722f00) Stream added, broadcasting: 1\nI0520 00:24:30.945369 2352 log.go:172] (0xc00095d1e0) Reply frame received for 1\nI0520 00:24:30.945415 2352 log.go:172] (0xc00095d1e0) (0xc000566fa0) Create stream\nI0520 00:24:30.945431 2352 log.go:172] (0xc00095d1e0) (0xc000566fa0) Stream added, broadcasting: 3\nI0520 00:24:30.946217 2352 log.go:172] (0xc00095d1e0) Reply frame received for 3\nI0520 00:24:30.946242 2352 log.go:172] (0xc00095d1e0) 
(0xc00030dd60) Create stream\nI0520 00:24:30.946254 2352 log.go:172] (0xc00095d1e0) (0xc00030dd60) Stream added, broadcasting: 5\nI0520 00:24:30.947024 2352 log.go:172] (0xc00095d1e0) Reply frame received for 5\nI0520 00:24:30.993145 2352 log.go:172] (0xc00095d1e0) Data frame received for 3\nI0520 00:24:30.993304 2352 log.go:172] (0xc000566fa0) (3) Data frame handling\nI0520 00:24:30.993415 2352 log.go:172] (0xc00095d1e0) Data frame received for 5\nI0520 00:24:30.993428 2352 log.go:172] (0xc00030dd60) (5) Data frame handling\nI0520 00:24:30.993438 2352 log.go:172] (0xc00030dd60) (5) Data frame sent\nI0520 00:24:30.993443 2352 log.go:172] (0xc00095d1e0) Data frame received for 5\nI0520 00:24:30.993447 2352 log.go:172] (0xc00030dd60) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.240.43 80\nConnection to 10.100.240.43 80 port [tcp/http] succeeded!\nI0520 00:24:30.994837 2352 log.go:172] (0xc00095d1e0) Data frame received for 1\nI0520 00:24:30.994850 2352 log.go:172] (0xc000722f00) (1) Data frame handling\nI0520 00:24:30.994856 2352 log.go:172] (0xc000722f00) (1) Data frame sent\nI0520 00:24:30.994865 2352 log.go:172] (0xc00095d1e0) (0xc000722f00) Stream removed, broadcasting: 1\nI0520 00:24:30.994875 2352 log.go:172] (0xc00095d1e0) Go away received\nI0520 00:24:30.995342 2352 log.go:172] (0xc00095d1e0) (0xc000722f00) Stream removed, broadcasting: 1\nI0520 00:24:30.995365 2352 log.go:172] (0xc00095d1e0) (0xc000566fa0) Stream removed, broadcasting: 3\nI0520 00:24:30.995381 2352 log.go:172] (0xc00095d1e0) (0xc00030dd60) Stream removed, broadcasting: 5\n" May 20 00:24:31.001: INFO: stdout: "" May 20 00:24:31.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9353 execpod-affinity8h5p5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.100.240.43:80/ ; done' May 20 00:24:31.337: INFO: stderr: "I0520 00:24:31.161343 2375 log.go:172] (0xc000a1a000) 
(0xc000167680) Create stream\nI0520 00:24:31.161401 2375 log.go:172] (0xc000a1a000) (0xc000167680) Stream added, broadcasting: 1\nI0520 00:24:31.164094 2375 log.go:172] (0xc000a1a000) Reply frame received for 1\nI0520 00:24:31.164134 2375 log.go:172] (0xc000a1a000) (0xc0004241e0) Create stream\nI0520 00:24:31.164146 2375 log.go:172] (0xc000a1a000) (0xc0004241e0) Stream added, broadcasting: 3\nI0520 00:24:31.165297 2375 log.go:172] (0xc000a1a000) Reply frame received for 3\nI0520 00:24:31.165379 2375 log.go:172] (0xc000a1a000) (0xc0002645a0) Create stream\nI0520 00:24:31.165397 2375 log.go:172] (0xc000a1a000) (0xc0002645a0) Stream added, broadcasting: 5\nI0520 00:24:31.166462 2375 log.go:172] (0xc000a1a000) Reply frame received for 5\nI0520 00:24:31.235734 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.235773 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.235794 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.235831 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.235875 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.235910 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.241878 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.241916 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.241941 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.241962 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.241991 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.242002 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.242044 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.242086 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 
00:24:31.242110 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.248064 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.248096 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.248113 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.248712 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.248732 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.248747 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0520 00:24:31.248834 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.248868 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.248882 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.248901 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.248913 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.248944 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n http://10.100.240.43:80/\nI0520 00:24:31.256330 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.256352 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.256363 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.256800 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.256818 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.256829 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.256850 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.256877 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.256903 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\nI0520 00:24:31.256919 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.256931 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.100.240.43:80/\nI0520 00:24:31.256961 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\nI0520 00:24:31.262735 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.262760 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.262790 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.263200 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.263219 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.263227 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.263238 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.263245 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.263252 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.267327 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.267364 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.267377 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.267983 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.268012 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.268040 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.268055 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.268069 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.268078 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.271652 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.271671 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.271686 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.271943 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.271954 2375 log.go:172] 
(0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.271970 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.271985 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.271993 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.272010 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.276309 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.276327 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.276345 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.277353 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.277379 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.277391 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.277406 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.277413 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.277421 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.283453 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.283534 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.283561 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.284410 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.284426 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.284439 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.284507 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.284536 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.284554 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.288698 2375 log.go:172] 
(0xc000a1a000) Data frame received for 3\nI0520 00:24:31.288712 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.288731 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.289069 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.289083 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.289094 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.289294 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.289317 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.289329 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.293834 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.293858 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.293876 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.294073 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.294086 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.294105 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\nI0520 00:24:31.294120 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.294132 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.294142 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.298707 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.298724 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.298743 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.299273 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.299292 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.299302 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.299318 
2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.299326 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.299336 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.303498 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.303520 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.303544 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.304032 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.304057 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.304092 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.304124 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.304148 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.304175 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.309890 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.309908 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.309925 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.310666 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.310715 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.310745 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.310799 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.310827 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.310854 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.316854 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.316870 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 
00:24:31.316881 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.317506 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.317546 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.317572 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.317609 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.317644 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.317675 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.322623 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.322653 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.322678 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.323105 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.323133 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.323149 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.323170 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.323178 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.323188 2375 log.go:172] (0xc0002645a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.327900 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.327923 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.327944 2375 log.go:172] (0xc0004241e0) (3) Data frame sent\nI0520 00:24:31.328608 2375 log.go:172] (0xc000a1a000) Data frame received for 5\nI0520 00:24:31.328646 2375 log.go:172] (0xc0002645a0) (5) Data frame handling\nI0520 00:24:31.328692 2375 log.go:172] (0xc000a1a000) Data frame received for 3\nI0520 00:24:31.328715 2375 log.go:172] (0xc0004241e0) (3) Data frame handling\nI0520 00:24:31.330578 2375 log.go:172] (0xc000a1a000) Data frame 
received for 1\nI0520 00:24:31.330605 2375 log.go:172] (0xc000167680) (1) Data frame handling\nI0520 00:24:31.330633 2375 log.go:172] (0xc000167680) (1) Data frame sent\nI0520 00:24:31.330649 2375 log.go:172] (0xc000a1a000) (0xc000167680) Stream removed, broadcasting: 1\nI0520 00:24:31.330664 2375 log.go:172] (0xc000a1a000) Go away received\nI0520 00:24:31.331087 2375 log.go:172] (0xc000a1a000) (0xc000167680) Stream removed, broadcasting: 1\nI0520 00:24:31.331109 2375 log.go:172] (0xc000a1a000) (0xc0004241e0) Stream removed, broadcasting: 3\nI0520 00:24:31.331122 2375 log.go:172] (0xc000a1a000) (0xc0002645a0) Stream removed, broadcasting: 5\n" May 20 00:24:31.338: INFO: stdout: "\naffinity-clusterip-transition-zkrxg\naffinity-clusterip-transition-zkrxg\naffinity-clusterip-transition-9h8np\naffinity-clusterip-transition-9h8np\naffinity-clusterip-transition-9h8np\naffinity-clusterip-transition-zkrxg\naffinity-clusterip-transition-9h8np\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-9h8np\naffinity-clusterip-transition-zkrxg\naffinity-clusterip-transition-9h8np\naffinity-clusterip-transition-9h8np\naffinity-clusterip-transition-9h8np\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q" May 20 00:24:31.338: INFO: Received response from host: May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-zkrxg May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-zkrxg May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-9h8np May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-9h8np May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-9h8np May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-zkrxg May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-9h8np May 20 
00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-9h8np May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-zkrxg May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-9h8np May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-9h8np May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-9h8np May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.338: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9353 execpod-affinity8h5p5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.100.240.43:80/ ; done' May 20 00:24:31.667: INFO: stderr: "I0520 00:24:31.510938 2395 log.go:172] (0xc0000e9a20) (0xc000af05a0) Create stream\nI0520 00:24:31.510998 2395 log.go:172] (0xc0000e9a20) (0xc000af05a0) Stream added, broadcasting: 1\nI0520 00:24:31.516241 2395 log.go:172] (0xc0000e9a20) Reply frame received for 1\nI0520 00:24:31.516286 2395 log.go:172] (0xc0000e9a20) (0xc0008526e0) Create stream\nI0520 00:24:31.516296 2395 log.go:172] (0xc0000e9a20) (0xc0008526e0) Stream added, broadcasting: 3\nI0520 00:24:31.517624 2395 log.go:172] (0xc0000e9a20) Reply frame received for 3\nI0520 00:24:31.517681 2395 log.go:172] (0xc0000e9a20) (0xc00085a000) Create stream\nI0520 00:24:31.517700 2395 log.go:172] (0xc0000e9a20) (0xc00085a000) Stream added, broadcasting: 5\nI0520 00:24:31.518709 2395 log.go:172] (0xc0000e9a20) Reply frame received for 5\nI0520 00:24:31.577820 2395 log.go:172] 
(0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.577876 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.577903 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.577917 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.577945 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.577958 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.583716 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.583761 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.583797 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.584539 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.584566 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.584590 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ echo\nI0520 00:24:31.584635 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.584651 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.584660 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.584696 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.584708 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.584725 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.591258 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.591278 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.591294 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.592114 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.592138 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.592173 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 
00:24:31.592189 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.592201 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.592207 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.599287 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.599312 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.599338 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.599724 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.599739 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.599747 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.599779 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.599796 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.599822 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.604441 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.604479 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.604521 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.605627 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.605657 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.605699 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.605720 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.605739 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.605752 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.609949 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.609988 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 
00:24:31.610026 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.610811 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.610833 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.610854 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.610909 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.610938 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.610965 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.614479 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.614533 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.614556 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.614907 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.614968 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.615005 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.615041 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.615061 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.615078 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.619938 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.619964 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.619986 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.620494 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.620512 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.620534 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.620543 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.100.240.43:80/\nI0520 00:24:31.620555 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.620566 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.624429 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.624450 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.624464 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.624911 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.624943 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.624962 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.624991 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.625008 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.625031 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.628773 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.628797 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.628818 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.629327 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.629357 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.629387 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.629494 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.629513 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.629525 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.633456 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.633471 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.633485 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.633951 2395 log.go:172] 
(0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.633980 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.633991 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.634005 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.634014 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.634028 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.637672 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.637695 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.637704 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.637876 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.637888 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.637910 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.638095 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.638116 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.638122 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.641917 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.641928 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.641934 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.642304 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.642314 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.642320 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.642418 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.642444 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.642478 2395 
log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.645862 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.645872 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.645878 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.646277 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.646295 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.646312 2395 log.go:172] (0xc00085a000) (5) Data frame sent\nI0520 00:24:31.646324 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.646341 2395 log.go:172] (0xc00085a000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.646359 2395 log.go:172] (0xc00085a000) (5) Data frame sent\nI0520 00:24:31.646382 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.646398 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.646412 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.650147 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.650174 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.650189 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.650582 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.650617 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.650662 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.650679 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.240.43:80/\nI0520 00:24:31.650698 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.650718 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.654579 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.654599 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 
00:24:31.654608 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.654788 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.654812 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.654838 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0520 00:24:31.654991 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.655015 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.655026 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.655044 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.655070 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.655093 2395 log.go:172] (0xc00085a000) (5) Data frame sent\n 2 http://10.100.240.43:80/\nI0520 00:24:31.659045 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.659066 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.659087 2395 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0520 00:24:31.659545 2395 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0520 00:24:31.659582 2395 log.go:172] (0xc00085a000) (5) Data frame handling\nI0520 00:24:31.659692 2395 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0520 00:24:31.659704 2395 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0520 00:24:31.661698 2395 log.go:172] (0xc0000e9a20) Data frame received for 1\nI0520 00:24:31.661792 2395 log.go:172] (0xc000af05a0) (1) Data frame handling\nI0520 00:24:31.661867 2395 log.go:172] (0xc000af05a0) (1) Data frame sent\nI0520 00:24:31.661894 2395 log.go:172] (0xc0000e9a20) (0xc000af05a0) Stream removed, broadcasting: 1\nI0520 00:24:31.661910 2395 log.go:172] (0xc0000e9a20) Go away received\nI0520 00:24:31.662207 2395 log.go:172] (0xc0000e9a20) (0xc000af05a0) Stream removed, broadcasting: 1\nI0520 00:24:31.662221 2395 log.go:172] (0xc0000e9a20) (0xc0008526e0) Stream removed, 
broadcasting: 3\nI0520 00:24:31.662228 2395 log.go:172] (0xc0000e9a20) (0xc00085a000) Stream removed, broadcasting: 5\n" May 20 00:24:31.668: INFO: stdout: "\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q\naffinity-clusterip-transition-cr86q" May 20 00:24:31.668: INFO: Received response from host: May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: 
affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Received response from host: affinity-clusterip-transition-cr86q May 20 00:24:31.668: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-9353, will wait for the garbage collector to delete the pods May 20 00:24:31.862: INFO: Deleting ReplicationController affinity-clusterip-transition took: 101.594657ms May 20 00:24:32.263: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.151573ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:24:45.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9353" for this suite. 
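The pass condition the test just verified — after switching the Service to `sessionAffinity: ClientIP`, all 16 curl replies name the same backend pod — can be sketched as a standalone shell snippet. This is an illustrative reconstruction, not the test's actual code: the pod names are stand-ins copied from the log above, and the real test drives the `for i in $(seq 0 15); do ... curl ... done` loop via `kubectl exec` and parses stdout.

```shell
# Hypothetical sketch of the session-affinity assertion: with ClientIP
# affinity enabled, every response should come from one backend pod,
# i.e. exactly one distinct hostname across all replies.
responses="affinity-clusterip-transition-cr86q
affinity-clusterip-transition-cr86q
affinity-clusterip-transition-cr86q"   # stand-in for the 16 curl replies

# Count distinct responders; 1 means affinity held.
distinct=$(printf '%s\n' "$responses" | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
    echo "session affinity held"
else
    echo "session affinity broken: $distinct backends"
fi
```

Note how the first run in the log (before affinity was switched on) shows three distinct pods (`zkrxg`, `9h8np`, `cr86q`), while the second run shows only `cr86q` — exactly the transition this check detects.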
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:28.960 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":159,"skipped":2563,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:24:45.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-48kw STEP: Creating a pod to test atomic-volume-subpath May 20 00:24:45.578: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-48kw" in namespace "subpath-323" to be "Succeeded or Failed" May 20 00:24:45.632: INFO: Pod "pod-subpath-test-secret-48kw": Phase="Pending", Reason="", readiness=false. 
Elapsed: 53.985365ms
May 20 00:24:47.635: INFO: Pod "pod-subpath-test-secret-48kw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057263638s
May 20 00:24:49.639: INFO: Pod "pod-subpath-test-secret-48kw": Phase="Running", Reason="", readiness=true. Elapsed: 4.061662749s
May 20 00:24:51.644: INFO: Pod "pod-subpath-test-secret-48kw": Phase="Running", Reason="", readiness=true. Elapsed: 6.066078463s
May 20 00:24:53.664: INFO: Pod "pod-subpath-test-secret-48kw": Phase="Running", Reason="", readiness=true. Elapsed: 8.086173261s
May 20 00:24:55.676: INFO: Pod "pod-subpath-test-secret-48kw": Phase="Running", Reason="", readiness=true. Elapsed: 10.097927963s
May 20 00:24:57.680: INFO: Pod "pod-subpath-test-secret-48kw": Phase="Running", Reason="", readiness=true. Elapsed: 12.102384361s
May 20 00:24:59.694: INFO: Pod "pod-subpath-test-secret-48kw": Phase="Running", Reason="", readiness=true. Elapsed: 14.116431445s
May 20 00:25:01.699: INFO: Pod "pod-subpath-test-secret-48kw": Phase="Running", Reason="", readiness=true. Elapsed: 16.120949321s
May 20 00:25:03.712: INFO: Pod "pod-subpath-test-secret-48kw": Phase="Running", Reason="", readiness=true. Elapsed: 18.133955392s
May 20 00:25:05.728: INFO: Pod "pod-subpath-test-secret-48kw": Phase="Running", Reason="", readiness=true. Elapsed: 20.150177731s
May 20 00:25:07.732: INFO: Pod "pod-subpath-test-secret-48kw": Phase="Running", Reason="", readiness=true. Elapsed: 22.154111497s
May 20 00:25:09.737: INFO: Pod "pod-subpath-test-secret-48kw": Phase="Running", Reason="", readiness=true. Elapsed: 24.159553147s
May 20 00:25:11.742: INFO: Pod "pod-subpath-test-secret-48kw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.16408204s
STEP: Saw pod success
May 20 00:25:11.742: INFO: Pod "pod-subpath-test-secret-48kw" satisfied condition "Succeeded or Failed"
May 20 00:25:11.746: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-48kw container test-container-subpath-secret-48kw: 
STEP: delete the pod
May 20 00:25:11.785: INFO: Waiting for pod pod-subpath-test-secret-48kw to disappear
May 20 00:25:11.843: INFO: Pod pod-subpath-test-secret-48kw no longer exists
STEP: Deleting pod pod-subpath-test-secret-48kw
May 20 00:25:11.843: INFO: Deleting pod "pod-subpath-test-secret-48kw" in namespace "subpath-323"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:25:11.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-323" for this suite.
• [SLOW TEST:26.415 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":160,"skipped":2574,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:25:11.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-12e0bb55-39cc-44c8-8c2b-566133985fb8
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-12e0bb55-39cc-44c8-8c2b-566133985fb8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:26:28.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7630" for this suite.
• [SLOW TEST:76.517 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":161,"skipped":2584,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:26:28.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 20 00:26:32.631: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:26:32.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9103" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":162,"skipped":2599,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:26:32.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0520 00:26:46.199985 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 20 00:26:46.200: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:26:46.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1832" for this suite.
• [SLOW TEST:13.509 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":163,"skipped":2601,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:26:46.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 20 00:26:46.307: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76915833-1fa0-4a59-b7f7-7f771702357b" in namespace "projected-9400" to be "Succeeded or Failed"
May 20 00:26:46.331: INFO: Pod "downwardapi-volume-76915833-1fa0-4a59-b7f7-7f771702357b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.73573ms
May 20 00:26:48.359: INFO: Pod "downwardapi-volume-76915833-1fa0-4a59-b7f7-7f771702357b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052378549s
May 20 00:26:50.364: INFO: Pod "downwardapi-volume-76915833-1fa0-4a59-b7f7-7f771702357b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056645148s
STEP: Saw pod success
May 20 00:26:50.364: INFO: Pod "downwardapi-volume-76915833-1fa0-4a59-b7f7-7f771702357b" satisfied condition "Succeeded or Failed"
May 20 00:26:50.367: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-76915833-1fa0-4a59-b7f7-7f771702357b container client-container: 
STEP: delete the pod
May 20 00:26:50.431: INFO: Waiting for pod downwardapi-volume-76915833-1fa0-4a59-b7f7-7f771702357b to disappear
May 20 00:26:50.438: INFO: Pod downwardapi-volume-76915833-1fa0-4a59-b7f7-7f771702357b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:26:50.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9400" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":164,"skipped":2616,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:26:50.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-f8e7ae1a-345c-4be9-b3b6-ceb04c5ddcd3
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:26:50.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1177" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":165,"skipped":2629,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:26:50.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod var-expansion-6e2baa61-a9fb-40de-b4eb-7b9b3733cb6a
STEP: updating the pod
May 20 00:27:01.123: INFO: Successfully updated pod "var-expansion-6e2baa61-a9fb-40de-b4eb-7b9b3733cb6a"
STEP: waiting for pod and container restart
STEP: Failing liveness probe
May 20 00:27:01.162: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-7906 PodName:var-expansion-6e2baa61-a9fb-40de-b4eb-7b9b3733cb6a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 20 00:27:01.162: INFO: >>> kubeConfig: /root/.kube/config
I0520 00:27:01.195917 7 log.go:172] (0xc002fe8840) (0xc00267a3c0) Create stream
I0520 00:27:01.195961 7 log.go:172] (0xc002fe8840) (0xc00267a3c0) Stream added, broadcasting: 1
I0520 00:27:01.197904 7 log.go:172] (0xc002fe8840) Reply frame received for 1
I0520 00:27:01.197940 7 log.go:172] (0xc002fe8840) (0xc001912a00) Create stream
I0520 00:27:01.197947 7 log.go:172] (0xc002fe8840) (0xc001912a00) Stream added, broadcasting: 3
I0520 00:27:01.198782 7 log.go:172] (0xc002fe8840) Reply frame received for 3
I0520 00:27:01.198806 7 log.go:172] (0xc002fe8840) (0xc00267a460) Create stream
I0520 00:27:01.198814 7 log.go:172] (0xc002fe8840) (0xc00267a460) Stream added, broadcasting: 5
I0520 00:27:01.199643 7 log.go:172] (0xc002fe8840) Reply frame received for 5
I0520 00:27:01.285616 7 log.go:172] (0xc002fe8840) Data frame received for 3
I0520 00:27:01.285650 7 log.go:172] (0xc001912a00) (3) Data frame handling
I0520 00:27:01.285822 7 log.go:172] (0xc002fe8840) Data frame received for 5
I0520 00:27:01.285875 7 log.go:172] (0xc00267a460) (5) Data frame handling
I0520 00:27:01.287081 7 log.go:172] (0xc002fe8840) Data frame received for 1
I0520 00:27:01.287105 7 log.go:172] (0xc00267a3c0) (1) Data frame handling
I0520 00:27:01.287129 7 log.go:172] (0xc00267a3c0) (1) Data frame sent
I0520 00:27:01.287144 7 log.go:172] (0xc002fe8840) (0xc00267a3c0) Stream removed, broadcasting: 1
I0520 00:27:01.287166 7 log.go:172] (0xc002fe8840) Go away received
I0520 00:27:01.287340 7 log.go:172] (0xc002fe8840) (0xc00267a3c0) Stream removed, broadcasting: 1
I0520 00:27:01.287359 7 log.go:172] (0xc002fe8840) (0xc001912a00) Stream removed, broadcasting: 3
I0520 00:27:01.287369 7 log.go:172] (0xc002fe8840) (0xc00267a460) Stream removed, broadcasting: 5
May 20 00:27:01.287: INFO: Pod exec output: /
STEP: Waiting for container to restart
May 20 00:27:01.291: INFO: Container dapi-container, restarts: 0
May 20 00:27:11.296: INFO: Container dapi-container, restarts: 0
May 20 00:27:21.296: INFO: Container dapi-container, restarts: 0
May 20 00:27:31.295: INFO: Container dapi-container, restarts: 0
May 20 00:27:41.295: INFO: Container dapi-container, restarts: 1
May 20 00:27:41.295: INFO: Container has restart count: 1
STEP: Rewriting the file
May 20 00:27:41.295: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-7906 PodName:var-expansion-6e2baa61-a9fb-40de-b4eb-7b9b3733cb6a ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 20 00:27:41.295: INFO: >>> kubeConfig: /root/.kube/config
I0520 00:27:41.330471 7 log.go:172] (0xc00203c370) (0xc00201d680) Create stream
I0520 00:27:41.330505 7 log.go:172] (0xc00203c370) (0xc00201d680) Stream added, broadcasting: 1
I0520 00:27:41.332434 7 log.go:172] (0xc00203c370) Reply frame received for 1
I0520 00:27:41.332475 7 log.go:172] (0xc00203c370) (0xc00201d720) Create stream
I0520 00:27:41.332484 7 log.go:172] (0xc00203c370) (0xc00201d720) Stream added, broadcasting: 3
I0520 00:27:41.333810 7 log.go:172] (0xc00203c370) Reply frame received for 3
I0520 00:27:41.333847 7 log.go:172] (0xc00203c370) (0xc00201d7c0) Create stream
I0520 00:27:41.333863 7 log.go:172] (0xc00203c370) (0xc00201d7c0) Stream added, broadcasting: 5
I0520 00:27:41.335705 7 log.go:172] (0xc00203c370) Reply frame received for 5
I0520 00:27:41.422086 7 log.go:172] (0xc00203c370) Data frame received for 5
I0520 00:27:41.422140 7 log.go:172] (0xc00203c370) Data frame received for 3
I0520 00:27:41.422196 7 log.go:172] (0xc00201d720) (3) Data frame handling
I0520 00:27:41.422234 7 log.go:172] (0xc00201d7c0) (5) Data frame handling
I0520 00:27:41.423468 7 log.go:172] (0xc00203c370) Data frame received for 1
I0520 00:27:41.423487 7 log.go:172] (0xc00201d680) (1) Data frame handling
I0520 00:27:41.423506 7 log.go:172] (0xc00201d680) (1) Data frame sent
I0520 00:27:41.423524 7 log.go:172] (0xc00203c370) (0xc00201d680) Stream removed, broadcasting: 1
I0520 00:27:41.423561 7 log.go:172] (0xc00203c370) Go away received
I0520 00:27:41.423706 7 log.go:172] (0xc00203c370) (0xc00201d680) Stream removed, broadcasting: 1
I0520 00:27:41.423738 7 log.go:172] (0xc00203c370) (0xc00201d720) Stream removed, broadcasting: 3
I0520 00:27:41.423762 7 log.go:172] (0xc00203c370) (0xc00201d7c0) Stream removed, broadcasting: 5
May 20 00:27:41.423: INFO: Exec stderr: ""
May 20 00:27:41.423: INFO: Pod exec output: 
STEP: Waiting for container to stop restarting
May 20 00:28:09.431: INFO: Container has restart count: 2
May 20 00:29:11.432: INFO: Container restart has stabilized
STEP: test for subpath mounted with old value
May 20 00:29:11.436: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-7906 PodName:var-expansion-6e2baa61-a9fb-40de-b4eb-7b9b3733cb6a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 20 00:29:11.436: INFO: >>> kubeConfig: /root/.kube/config
I0520 00:29:11.469324 7 log.go:172] (0xc00189efd0) (0xc002b13540) Create stream
I0520 00:29:11.469353 7 log.go:172] (0xc00189efd0) (0xc002b13540) Stream added, broadcasting: 1
I0520 00:29:11.470864 7 log.go:172] (0xc00189efd0) Reply frame received for 1
I0520 00:29:11.470888 7 log.go:172] (0xc00189efd0) (0xc00201cfa0) Create stream
I0520 00:29:11.470897 7 log.go:172] (0xc00189efd0) (0xc00201cfa0) Stream added, broadcasting: 3
I0520 00:29:11.471591 7 log.go:172] (0xc00189efd0) Reply frame received for 3
I0520 00:29:11.471614 7 log.go:172] (0xc00189efd0) (0xc001ca8780) Create stream
I0520 00:29:11.471622 7 log.go:172] (0xc00189efd0) (0xc001ca8780) Stream added, broadcasting: 5
I0520 00:29:11.472266 7 log.go:172] (0xc00189efd0) Reply frame received for 5
I0520 00:29:11.523013 7 log.go:172] (0xc00189efd0) Data frame received for 3
I0520 00:29:11.523033 7 log.go:172] (0xc00201cfa0) (3) Data frame handling
I0520 00:29:11.523048 7 log.go:172] (0xc00189efd0) Data frame received for 5
I0520 00:29:11.523053 7 log.go:172] (0xc001ca8780) (5) Data frame handling
I0520 00:29:11.526799 7 log.go:172] (0xc00189efd0) Data frame received for 1
I0520 00:29:11.526816 7 log.go:172] (0xc002b13540) (1) Data frame handling
I0520 00:29:11.526833 7 log.go:172] (0xc002b13540) (1) Data frame sent
I0520 00:29:11.526846 7 log.go:172] (0xc00189efd0) (0xc002b13540) Stream removed, broadcasting: 1
I0520 00:29:11.526893 7 log.go:172] (0xc00189efd0) (0xc002b13540) Stream removed, broadcasting: 1
I0520 00:29:11.526903 7 log.go:172] (0xc00189efd0) (0xc00201cfa0) Stream removed, broadcasting: 3
I0520 00:29:11.526975 7 log.go:172] (0xc00189efd0) Go away received
I0520 00:29:11.527005 7 log.go:172] (0xc00189efd0) (0xc001ca8780) Stream removed, broadcasting: 5
May 20 00:29:11.530: INFO: ExecWithOptions {Command:[/bin/sh -c test ! -f /volume_mount/newsubpath/test.log] Namespace:var-expansion-7906 PodName:var-expansion-6e2baa61-a9fb-40de-b4eb-7b9b3733cb6a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 20 00:29:11.530: INFO: >>> kubeConfig: /root/.kube/config
I0520 00:29:11.565046 7 log.go:172] (0xc00189f550) (0xc002b13ae0) Create stream
I0520 00:29:11.565075 7 log.go:172] (0xc00189f550) (0xc002b13ae0) Stream added, broadcasting: 1
I0520 00:29:11.567126 7 log.go:172] (0xc00189f550) Reply frame received for 1
I0520 00:29:11.567163 7 log.go:172] (0xc00189f550) (0xc001833860) Create stream
I0520 00:29:11.567181 7 log.go:172] (0xc00189f550) (0xc001833860) Stream added, broadcasting: 3
I0520 00:29:11.568097 7 log.go:172] (0xc00189f550) Reply frame received for 3
I0520 00:29:11.568130 7 log.go:172] (0xc00189f550) (0xc00201d0e0) Create stream
I0520 00:29:11.568141 7 log.go:172] (0xc00189f550) (0xc00201d0e0) Stream added, broadcasting: 5
I0520 00:29:11.569083 7 log.go:172] (0xc00189f550) Reply frame received for 5
I0520 00:29:11.642512 7 log.go:172] (0xc00189f550) Data frame received for 3
I0520 00:29:11.642543 7 log.go:172] (0xc001833860) (3) Data frame handling
I0520 00:29:11.642565 7 log.go:172] (0xc00189f550) Data frame received for 5
I0520 00:29:11.642577 7 log.go:172] (0xc00201d0e0) (5) Data frame handling
I0520 00:29:11.644214 7 log.go:172] (0xc00189f550) Data frame received for 1
I0520 00:29:11.644232 7 log.go:172] (0xc002b13ae0) (1) Data frame handling
I0520 00:29:11.644265 7 log.go:172] (0xc002b13ae0) (1) Data frame sent
I0520 00:29:11.644279 7 log.go:172] (0xc00189f550) (0xc002b13ae0) Stream removed, broadcasting: 1
I0520 00:29:11.644329 7 log.go:172] (0xc00189f550) (0xc002b13ae0) Stream removed, broadcasting: 1
I0520 00:29:11.644341 7 log.go:172] (0xc00189f550) (0xc001833860) Stream removed, broadcasting: 3
I0520 00:29:11.644352 7 log.go:172] (0xc00189f550) (0xc00201d0e0) Stream removed, broadcasting: 5
May 20 00:29:11.644: INFO: Deleting pod "var-expansion-6e2baa61-a9fb-40de-b4eb-7b9b3733cb6a" in namespace "var-expansion-7906"
I0520 00:29:11.644466 7 log.go:172] (0xc00189f550) Go away received
May 20 00:29:11.650: INFO: Wait up to 5m0s for pod "var-expansion-6e2baa61-a9fb-40de-b4eb-7b9b3733cb6a" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:29:55.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7906" for this suite.
• [SLOW TEST:185.150 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":166,"skipped":2648,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:29:55.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-bfbe8cac-6978-47a3-a13d-081dfcc52194
STEP: Creating secret with name s-test-opt-upd-de6485bf-c4a9-4df4-b620-0cda2a0469b6
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-bfbe8cac-6978-47a3-a13d-081dfcc52194
STEP: Updating secret s-test-opt-upd-de6485bf-c4a9-4df4-b620-0cda2a0469b6
STEP: Creating secret with name s-test-opt-create-eb530da5-cbd7-42d8-b834-c9eaf52cb12a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:30:03.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4314" for this suite.
• [SLOW TEST:8.275 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":167,"skipped":2678,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:30:03.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:30:08.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6891" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":168,"skipped":2703,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:30:08.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-a8b38d23-0712-4b0a-83ec-36851b23e018
STEP: Creating a pod to test consume secrets
May 20 00:30:08.196: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-db71aa9e-9b8c-4305-a1b5-1eb708b0476f" in namespace "projected-3445" to be "Succeeded or Failed"
May 20 00:30:08.216: INFO: Pod "pod-projected-secrets-db71aa9e-9b8c-4305-a1b5-1eb708b0476f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.385592ms
May 20 00:30:10.223: INFO: Pod "pod-projected-secrets-db71aa9e-9b8c-4305-a1b5-1eb708b0476f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026350445s
May 20 00:30:12.227: INFO: Pod "pod-projected-secrets-db71aa9e-9b8c-4305-a1b5-1eb708b0476f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030415599s
STEP: Saw pod success
May 20 00:30:12.227: INFO: Pod "pod-projected-secrets-db71aa9e-9b8c-4305-a1b5-1eb708b0476f" satisfied condition "Succeeded or Failed"
May 20 00:30:12.230: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-db71aa9e-9b8c-4305-a1b5-1eb708b0476f container projected-secret-volume-test: 
STEP: delete the pod
May 20 00:30:12.421: INFO: Waiting for pod pod-projected-secrets-db71aa9e-9b8c-4305-a1b5-1eb708b0476f to disappear
May 20 00:30:12.563: INFO: Pod pod-projected-secrets-db71aa9e-9b8c-4305-a1b5-1eb708b0476f no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:30:12.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3445" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":169,"skipped":2746,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:30:12.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-8682
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating stateful set ss in namespace statefulset-8682
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8682
May 20 00:30:12.768: INFO: Found 0 stateful pods, waiting for 1
May 20 00:30:22.773: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May 20 00:30:22.777: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8682 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 20 00:30:23.063: INFO: stderr: "I0520 00:30:22.938815 2417 log.go:172] (0xc0000e8c60) (0xc00052f0e0) Create stream\nI0520 00:30:22.938885 2417 log.go:172] (0xc0000e8c60) (0xc00052f0e0) Stream added, broadcasting: 1\nI0520 00:30:22.941652 2417 log.go:172] (0xc0000e8c60) Reply frame received for 1\nI0520 00:30:22.941721 2417 log.go:172] (0xc0000e8c60) (0xc000432e60) Create stream\nI0520 00:30:22.941749 2417 log.go:172] (0xc0000e8c60) (0xc000432e60) Stream added, broadcasting: 3\nI0520 00:30:22.943105 2417 log.go:172] (0xc0000e8c60) Reply frame received for 3\nI0520 00:30:22.943154 2417 log.go:172] (0xc0000e8c60) (0xc00042e320) Create stream\nI0520 00:30:22.943177 2417 log.go:172] (0xc0000e8c60) (0xc00042e320) Stream added, broadcasting: 5\nI0520 00:30:22.944386 2417 log.go:172] (0xc0000e8c60) Reply frame received for 5\nI0520 00:30:23.038315 2417 log.go:172] (0xc0000e8c60) Data frame received for 5\nI0520 00:30:23.038382 2417 log.go:172] (0xc00042e320) (5) Data frame handling\nI0520 00:30:23.038429 2417 log.go:172] (0xc00042e320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0520 00:30:23.057955 2417 log.go:172] (0xc0000e8c60) Data frame received for 3\nI0520 00:30:23.057999 2417 log.go:172] (0xc000432e60) (3) Data frame handling\nI0520 00:30:23.058022 2417 log.go:172] (0xc000432e60) (3) Data frame sent\nI0520 00:30:23.058037 2417 log.go:172] (0xc0000e8c60) Data frame received for 5\nI0520 00:30:23.058048 2417 log.go:172] (0xc00042e320) (5) Data frame handling\nI0520 00:30:23.058063 2417 log.go:172] (0xc0000e8c60) Data frame received for 3\nI0520 00:30:23.058070 2417 log.go:172] (0xc000432e60) (3) Data frame handling\nI0520 00:30:23.059859 2417 log.go:172] (0xc0000e8c60) Data frame received for 1\nI0520 00:30:23.059871 2417 log.go:172] (0xc00052f0e0) (1) Data frame handling\nI0520 00:30:23.059887 2417 log.go:172] (0xc00052f0e0) (1) Data frame sent\nI0520 00:30:23.059998 2417 log.go:172] (0xc0000e8c60) (0xc00052f0e0) Stream removed, broadcasting: 1\nI0520 00:30:23.060027 2417 log.go:172] (0xc0000e8c60) Go away received\nI0520 00:30:23.060293 2417 log.go:172] (0xc0000e8c60) (0xc00052f0e0) Stream removed, broadcasting: 1\nI0520 00:30:23.060312 2417 log.go:172] (0xc0000e8c60) (0xc000432e60) Stream removed, broadcasting: 3\nI0520 00:30:23.060320 2417 log.go:172] (0xc0000e8c60) (0xc00042e320) Stream removed, broadcasting: 5\n"
May 20 00:30:23.063: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 20 00:30:23.063: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 20 00:30:23.067: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 20 00:30:33.071: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 20 00:30:33.071: INFO: Waiting for statefulset status.replicas updated to 0
May 20 00:30:33.115: INFO: POD NODE PHASE GRACE CONDITIONS
May 20 00:30:33.115: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:12 +0000 UTC }]
May 20 00:30:33.115: INFO: 
May 20 00:30:33.115: INFO: StatefulSet ss has not reached scale 3, at 1
May 20 00:30:34.120: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.967082272s
May 20 00:30:35.242: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.961745979s
May 20 00:30:36.247: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.840025127s
May 20 00:30:37.252: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.835111453s
May 20 00:30:38.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.830794042s
May 20 00:30:39.267: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.820491429s
May 20 00:30:40.272: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.815713452s
May 20 00:30:41.276: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.810734451s
May 20 00:30:42.280: INFO: Verifying statefulset ss doesn't scale past 3 for another 806.548187ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8682
May 20 00:30:43.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8682 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 20 00:30:43.728: INFO: stderr: "I0520 00:30:43.599835 2439 log.go:172] (0xc000bd6f20) (0xc000307b80) Create stream\nI0520 00:30:43.599925 2439 log.go:172] (0xc000bd6f20) (0xc000307b80) Stream added, broadcasting: 1\nI0520 00:30:43.603106 2439 log.go:172] (0xc000bd6f20) Reply frame received for 1\nI0520 00:30:43.603149 2439 log.go:172] (0xc000bd6f20) (0xc0006a9040) Create stream\nI0520 00:30:43.603157 2439 log.go:172] (0xc000bd6f20) (0xc0006a9040) Stream added, broadcasting: 3\nI0520 00:30:43.603958 2439 log.go:172] (0xc000bd6f20) Reply frame received for 3\nI0520 00:30:43.603988 2439 log.go:172] (0xc000bd6f20) (0xc00068c640) Create stream\nI0520 00:30:43.603995 2439 log.go:172] (0xc000bd6f20) (0xc00068c640) Stream added, broadcasting: 5\nI0520 00:30:43.604647 2439 log.go:172] (0xc000bd6f20) Reply frame received for 5\nI0520 00:30:43.666672 2439 log.go:172] (0xc000bd6f20) Data frame received for 5\nI0520 00:30:43.666699 2439 log.go:172] (0xc00068c640) (5) Data frame handling\nI0520 00:30:43.666712 2439 log.go:172] (0xc00068c640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0520 00:30:43.722087 2439 log.go:172] (0xc000bd6f20) Data frame received for 3\nI0520 00:30:43.722129 2439 log.go:172] (0xc0006a9040) (3) Data frame handling\nI0520 00:30:43.722160 2439 log.go:172] (0xc0006a9040) (3) Data frame sent\nI0520 00:30:43.722188 2439 log.go:172] (0xc000bd6f20) Data frame received for 5\nI0520 00:30:43.722205 2439 log.go:172] (0xc00068c640) (5) Data frame handling\nI0520 00:30:43.722390 2439 log.go:172] (0xc000bd6f20) Data frame received for 3\nI0520 00:30:43.722411 2439 log.go:172] (0xc0006a9040) (3) Data frame handling\nI0520 00:30:43.724081 2439 log.go:172] (0xc000bd6f20) Data frame received for 1\nI0520 00:30:43.724097 2439 log.go:172] (0xc000307b80) (1) Data frame handling\nI0520 00:30:43.724104 2439 log.go:172] (0xc000307b80) (1) Data frame sent\nI0520 00:30:43.724113 2439 log.go:172] (0xc000bd6f20) (0xc000307b80) Stream removed, broadcasting: 1\nI0520 00:30:43.724124 2439 log.go:172] (0xc000bd6f20) Go away received\nI0520 00:30:43.724513 2439 log.go:172] 
(0xc000bd6f20) (0xc000307b80) Stream removed, broadcasting: 1\nI0520 00:30:43.724531 2439 log.go:172] (0xc000bd6f20) (0xc0006a9040) Stream removed, broadcasting: 3\nI0520 00:30:43.724538 2439 log.go:172] (0xc000bd6f20) (0xc00068c640) Stream removed, broadcasting: 5\n" May 20 00:30:43.728: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 20 00:30:43.728: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 20 00:30:43.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8682 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 20 00:30:43.974: INFO: stderr: "I0520 00:30:43.892062 2459 log.go:172] (0xc00003afd0) (0xc0009001e0) Create stream\nI0520 00:30:43.892261 2459 log.go:172] (0xc00003afd0) (0xc0009001e0) Stream added, broadcasting: 1\nI0520 00:30:43.895135 2459 log.go:172] (0xc00003afd0) Reply frame received for 1\nI0520 00:30:43.895201 2459 log.go:172] (0xc00003afd0) (0xc000917360) Create stream\nI0520 00:30:43.895215 2459 log.go:172] (0xc00003afd0) (0xc000917360) Stream added, broadcasting: 3\nI0520 00:30:43.896138 2459 log.go:172] (0xc00003afd0) Reply frame received for 3\nI0520 00:30:43.896190 2459 log.go:172] (0xc00003afd0) (0xc00093ab40) Create stream\nI0520 00:30:43.896214 2459 log.go:172] (0xc00003afd0) (0xc00093ab40) Stream added, broadcasting: 5\nI0520 00:30:43.897078 2459 log.go:172] (0xc00003afd0) Reply frame received for 5\nI0520 00:30:43.967204 2459 log.go:172] (0xc00003afd0) Data frame received for 5\nI0520 00:30:43.967246 2459 log.go:172] (0xc00093ab40) (5) Data frame handling\nI0520 00:30:43.967266 2459 log.go:172] (0xc00093ab40) (5) Data frame sent\nI0520 00:30:43.967276 2459 log.go:172] (0xc00003afd0) Data frame received for 5\nI0520 00:30:43.967284 2459 log.go:172] (0xc00093ab40) (5) Data frame 
handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0520 00:30:43.967307 2459 log.go:172] (0xc00003afd0) Data frame received for 3\nI0520 00:30:43.967316 2459 log.go:172] (0xc000917360) (3) Data frame handling\nI0520 00:30:43.967332 2459 log.go:172] (0xc000917360) (3) Data frame sent\nI0520 00:30:43.967350 2459 log.go:172] (0xc00003afd0) Data frame received for 3\nI0520 00:30:43.967359 2459 log.go:172] (0xc000917360) (3) Data frame handling\nI0520 00:30:43.968827 2459 log.go:172] (0xc00003afd0) Data frame received for 1\nI0520 00:30:43.968848 2459 log.go:172] (0xc0009001e0) (1) Data frame handling\nI0520 00:30:43.968864 2459 log.go:172] (0xc0009001e0) (1) Data frame sent\nI0520 00:30:43.968876 2459 log.go:172] (0xc00003afd0) (0xc0009001e0) Stream removed, broadcasting: 1\nI0520 00:30:43.968907 2459 log.go:172] (0xc00003afd0) Go away received\nI0520 00:30:43.969612 2459 log.go:172] (0xc00003afd0) (0xc0009001e0) Stream removed, broadcasting: 1\nI0520 00:30:43.969659 2459 log.go:172] (0xc00003afd0) (0xc000917360) Stream removed, broadcasting: 3\nI0520 00:30:43.969680 2459 log.go:172] (0xc00003afd0) (0xc00093ab40) Stream removed, broadcasting: 5\n" May 20 00:30:43.975: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 20 00:30:43.975: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 20 00:30:43.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8682 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 20 00:30:44.177: INFO: stderr: "I0520 00:30:44.101268 2479 log.go:172] (0xc000c1d6b0) (0xc000ac0460) Create stream\nI0520 00:30:44.101327 2479 log.go:172] (0xc000c1d6b0) (0xc000ac0460) Stream added, broadcasting: 1\nI0520 00:30:44.105651 2479 
log.go:172] (0xc000c1d6b0) Reply frame received for 1\nI0520 00:30:44.105685 2479 log.go:172] (0xc000c1d6b0) (0xc0006acf00) Create stream\nI0520 00:30:44.105693 2479 log.go:172] (0xc000c1d6b0) (0xc0006acf00) Stream added, broadcasting: 3\nI0520 00:30:44.106434 2479 log.go:172] (0xc000c1d6b0) Reply frame received for 3\nI0520 00:30:44.106471 2479 log.go:172] (0xc000c1d6b0) (0xc000307220) Create stream\nI0520 00:30:44.106479 2479 log.go:172] (0xc000c1d6b0) (0xc000307220) Stream added, broadcasting: 5\nI0520 00:30:44.107305 2479 log.go:172] (0xc000c1d6b0) Reply frame received for 5\nI0520 00:30:44.171450 2479 log.go:172] (0xc000c1d6b0) Data frame received for 5\nI0520 00:30:44.171481 2479 log.go:172] (0xc000307220) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0520 00:30:44.171507 2479 log.go:172] (0xc000c1d6b0) Data frame received for 3\nI0520 00:30:44.171548 2479 log.go:172] (0xc0006acf00) (3) Data frame handling\nI0520 00:30:44.171561 2479 log.go:172] (0xc0006acf00) (3) Data frame sent\nI0520 00:30:44.171572 2479 log.go:172] (0xc000c1d6b0) Data frame received for 3\nI0520 00:30:44.171581 2479 log.go:172] (0xc0006acf00) (3) Data frame handling\nI0520 00:30:44.171632 2479 log.go:172] (0xc000307220) (5) Data frame sent\nI0520 00:30:44.171711 2479 log.go:172] (0xc000c1d6b0) Data frame received for 5\nI0520 00:30:44.171730 2479 log.go:172] (0xc000307220) (5) Data frame handling\nI0520 00:30:44.173352 2479 log.go:172] (0xc000c1d6b0) Data frame received for 1\nI0520 00:30:44.173378 2479 log.go:172] (0xc000ac0460) (1) Data frame handling\nI0520 00:30:44.173416 2479 log.go:172] (0xc000ac0460) (1) Data frame sent\nI0520 00:30:44.173443 2479 log.go:172] (0xc000c1d6b0) (0xc000ac0460) Stream removed, broadcasting: 1\nI0520 00:30:44.173671 2479 log.go:172] (0xc000c1d6b0) Go away received\nI0520 00:30:44.173719 2479 log.go:172] (0xc000c1d6b0) (0xc000ac0460) Stream removed, 
broadcasting: 1\nI0520 00:30:44.173743 2479 log.go:172] (0xc000c1d6b0) (0xc0006acf00) Stream removed, broadcasting: 3\nI0520 00:30:44.173758 2479 log.go:172] (0xc000c1d6b0) (0xc000307220) Stream removed, broadcasting: 5\n" May 20 00:30:44.177: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 20 00:30:44.177: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 20 00:30:44.182: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 20 00:30:54.187: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 20 00:30:54.187: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 20 00:30:54.187: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 20 00:30:54.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8682 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 00:30:54.424: INFO: stderr: "I0520 00:30:54.323698 2500 log.go:172] (0xc000714370) (0xc000554e60) Create stream\nI0520 00:30:54.323752 2500 log.go:172] (0xc000714370) (0xc000554e60) Stream added, broadcasting: 1\nI0520 00:30:54.325853 2500 log.go:172] (0xc000714370) Reply frame received for 1\nI0520 00:30:54.325902 2500 log.go:172] (0xc000714370) (0xc0000df0e0) Create stream\nI0520 00:30:54.325916 2500 log.go:172] (0xc000714370) (0xc0000df0e0) Stream added, broadcasting: 3\nI0520 00:30:54.326976 2500 log.go:172] (0xc000714370) Reply frame received for 3\nI0520 00:30:54.327039 2500 log.go:172] (0xc000714370) (0xc00013d7c0) Create stream\nI0520 00:30:54.327055 2500 log.go:172] (0xc000714370) (0xc00013d7c0) Stream added, broadcasting: 5\nI0520 00:30:54.327891 
2500 log.go:172] (0xc000714370) Reply frame received for 5\nI0520 00:30:54.418046 2500 log.go:172] (0xc000714370) Data frame received for 3\nI0520 00:30:54.418089 2500 log.go:172] (0xc0000df0e0) (3) Data frame handling\nI0520 00:30:54.418103 2500 log.go:172] (0xc0000df0e0) (3) Data frame sent\nI0520 00:30:54.418110 2500 log.go:172] (0xc000714370) Data frame received for 3\nI0520 00:30:54.418116 2500 log.go:172] (0xc0000df0e0) (3) Data frame handling\nI0520 00:30:54.418144 2500 log.go:172] (0xc000714370) Data frame received for 5\nI0520 00:30:54.418151 2500 log.go:172] (0xc00013d7c0) (5) Data frame handling\nI0520 00:30:54.418162 2500 log.go:172] (0xc00013d7c0) (5) Data frame sent\nI0520 00:30:54.418168 2500 log.go:172] (0xc000714370) Data frame received for 5\nI0520 00:30:54.418174 2500 log.go:172] (0xc00013d7c0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0520 00:30:54.419755 2500 log.go:172] (0xc000714370) Data frame received for 1\nI0520 00:30:54.419780 2500 log.go:172] (0xc000554e60) (1) Data frame handling\nI0520 00:30:54.419812 2500 log.go:172] (0xc000554e60) (1) Data frame sent\nI0520 00:30:54.419832 2500 log.go:172] (0xc000714370) (0xc000554e60) Stream removed, broadcasting: 1\nI0520 00:30:54.419851 2500 log.go:172] (0xc000714370) Go away received\nI0520 00:30:54.420303 2500 log.go:172] (0xc000714370) (0xc000554e60) Stream removed, broadcasting: 1\nI0520 00:30:54.420320 2500 log.go:172] (0xc000714370) (0xc0000df0e0) Stream removed, broadcasting: 3\nI0520 00:30:54.420328 2500 log.go:172] (0xc000714370) (0xc00013d7c0) Stream removed, broadcasting: 5\n" May 20 00:30:54.424: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 00:30:54.424: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 20 00:30:54.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-8682 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 00:30:54.695: INFO: stderr: "I0520 00:30:54.564518 2522 log.go:172] (0xc000536fd0) (0xc000ad06e0) Create stream\nI0520 00:30:54.564574 2522 log.go:172] (0xc000536fd0) (0xc000ad06e0) Stream added, broadcasting: 1\nI0520 00:30:54.570128 2522 log.go:172] (0xc000536fd0) Reply frame received for 1\nI0520 00:30:54.570174 2522 log.go:172] (0xc000536fd0) (0xc0005e2640) Create stream\nI0520 00:30:54.570182 2522 log.go:172] (0xc000536fd0) (0xc0005e2640) Stream added, broadcasting: 3\nI0520 00:30:54.571243 2522 log.go:172] (0xc000536fd0) Reply frame received for 3\nI0520 00:30:54.571282 2522 log.go:172] (0xc000536fd0) (0xc0004d4e60) Create stream\nI0520 00:30:54.571294 2522 log.go:172] (0xc000536fd0) (0xc0004d4e60) Stream added, broadcasting: 5\nI0520 00:30:54.572086 2522 log.go:172] (0xc000536fd0) Reply frame received for 5\nI0520 00:30:54.632141 2522 log.go:172] (0xc000536fd0) Data frame received for 5\nI0520 00:30:54.632172 2522 log.go:172] (0xc0004d4e60) (5) Data frame handling\nI0520 00:30:54.632192 2522 log.go:172] (0xc0004d4e60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0520 00:30:54.688136 2522 log.go:172] (0xc000536fd0) Data frame received for 3\nI0520 00:30:54.688169 2522 log.go:172] (0xc0005e2640) (3) Data frame handling\nI0520 00:30:54.688189 2522 log.go:172] (0xc0005e2640) (3) Data frame sent\nI0520 00:30:54.688199 2522 log.go:172] (0xc000536fd0) Data frame received for 3\nI0520 00:30:54.688206 2522 log.go:172] (0xc0005e2640) (3) Data frame handling\nI0520 00:30:54.688351 2522 log.go:172] (0xc000536fd0) Data frame received for 5\nI0520 00:30:54.688380 2522 log.go:172] (0xc0004d4e60) (5) Data frame handling\nI0520 00:30:54.690601 2522 log.go:172] (0xc000536fd0) Data frame received for 1\nI0520 00:30:54.690644 2522 log.go:172] (0xc000ad06e0) (1) Data frame handling\nI0520 
00:30:54.690677 2522 log.go:172] (0xc000ad06e0) (1) Data frame sent\nI0520 00:30:54.690704 2522 log.go:172] (0xc000536fd0) (0xc000ad06e0) Stream removed, broadcasting: 1\nI0520 00:30:54.690748 2522 log.go:172] (0xc000536fd0) Go away received\nI0520 00:30:54.691019 2522 log.go:172] (0xc000536fd0) (0xc000ad06e0) Stream removed, broadcasting: 1\nI0520 00:30:54.691043 2522 log.go:172] (0xc000536fd0) (0xc0005e2640) Stream removed, broadcasting: 3\nI0520 00:30:54.691050 2522 log.go:172] (0xc000536fd0) (0xc0004d4e60) Stream removed, broadcasting: 5\n" May 20 00:30:54.695: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 00:30:54.695: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 20 00:30:54.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8682 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 00:30:55.024: INFO: stderr: "I0520 00:30:54.832348 2545 log.go:172] (0xc0009f16b0) (0xc0006626e0) Create stream\nI0520 00:30:54.832434 2545 log.go:172] (0xc0009f16b0) (0xc0006626e0) Stream added, broadcasting: 1\nI0520 00:30:54.843113 2545 log.go:172] (0xc0009f16b0) Reply frame received for 1\nI0520 00:30:54.843192 2545 log.go:172] (0xc0009f16b0) (0xc0006495e0) Create stream\nI0520 00:30:54.843242 2545 log.go:172] (0xc0009f16b0) (0xc0006495e0) Stream added, broadcasting: 3\nI0520 00:30:54.846026 2545 log.go:172] (0xc0009f16b0) Reply frame received for 3\nI0520 00:30:54.846049 2545 log.go:172] (0xc0009f16b0) (0xc00063cdc0) Create stream\nI0520 00:30:54.846056 2545 log.go:172] (0xc0009f16b0) (0xc00063cdc0) Stream added, broadcasting: 5\nI0520 00:30:54.846622 2545 log.go:172] (0xc0009f16b0) Reply frame received for 5\nI0520 00:30:54.980042 2545 log.go:172] (0xc0009f16b0) Data frame received for 5\nI0520 00:30:54.980071 2545 
log.go:172] (0xc00063cdc0) (5) Data frame handling\nI0520 00:30:54.980090 2545 log.go:172] (0xc00063cdc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0520 00:30:55.018042 2545 log.go:172] (0xc0009f16b0) Data frame received for 3\nI0520 00:30:55.018079 2545 log.go:172] (0xc0006495e0) (3) Data frame handling\nI0520 00:30:55.018096 2545 log.go:172] (0xc0006495e0) (3) Data frame sent\nI0520 00:30:55.018310 2545 log.go:172] (0xc0009f16b0) Data frame received for 5\nI0520 00:30:55.018331 2545 log.go:172] (0xc00063cdc0) (5) Data frame handling\nI0520 00:30:55.018351 2545 log.go:172] (0xc0009f16b0) Data frame received for 3\nI0520 00:30:55.018357 2545 log.go:172] (0xc0006495e0) (3) Data frame handling\nI0520 00:30:55.019889 2545 log.go:172] (0xc0009f16b0) Data frame received for 1\nI0520 00:30:55.019906 2545 log.go:172] (0xc0006626e0) (1) Data frame handling\nI0520 00:30:55.019923 2545 log.go:172] (0xc0006626e0) (1) Data frame sent\nI0520 00:30:55.019998 2545 log.go:172] (0xc0009f16b0) (0xc0006626e0) Stream removed, broadcasting: 1\nI0520 00:30:55.020064 2545 log.go:172] (0xc0009f16b0) Go away received\nI0520 00:30:55.020314 2545 log.go:172] (0xc0009f16b0) (0xc0006626e0) Stream removed, broadcasting: 1\nI0520 00:30:55.020329 2545 log.go:172] (0xc0009f16b0) (0xc0006495e0) Stream removed, broadcasting: 3\nI0520 00:30:55.020338 2545 log.go:172] (0xc0009f16b0) (0xc00063cdc0) Stream removed, broadcasting: 5\n" May 20 00:30:55.024: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 00:30:55.024: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 20 00:30:55.024: INFO: Waiting for statefulset status.replicas updated to 0 May 20 00:30:55.038: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 20 00:31:05.044: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - 
Ready=false May 20 00:31:05.044: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 20 00:31:05.044: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 20 00:31:05.072: INFO: POD NODE PHASE GRACE CONDITIONS May 20 00:31:05.072: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:12 +0000 UTC }] May 20 00:31:05.072: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC }] May 20 00:31:05.072: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC }] May 20 00:31:05.072: INFO: May 20 00:31:05.072: INFO: StatefulSet ss has not reached scale 0, at 3 May 20 00:31:06.273: INFO: POD NODE PHASE GRACE CONDITIONS May 20 00:31:06.273: INFO: ss-0 
latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:12 +0000 UTC }] May 20 00:31:06.273: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC }] May 20 00:31:06.273: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC }] May 20 00:31:06.273: INFO: May 20 00:31:06.273: INFO: StatefulSet ss has not reached scale 0, at 3 May 20 00:31:07.311: INFO: POD NODE PHASE GRACE CONDITIONS May 20 00:31:07.311: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 
UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:12 +0000 UTC }] May 20 00:31:07.311: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC }] May 20 00:31:07.311: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC }] May 20 00:31:07.311: INFO: May 20 00:31:07.311: INFO: StatefulSet ss has not reached scale 0, at 3 May 20 00:31:08.315: INFO: POD NODE PHASE GRACE CONDITIONS May 20 00:31:08.315: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:12 +0000 UTC }] May 20 00:31:08.315: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC }] May 20 00:31:08.315: INFO: May 20 00:31:08.315: INFO: StatefulSet ss has not reached scale 0, at 2 May 20 00:31:09.321: INFO: POD NODE PHASE GRACE CONDITIONS May 20 00:31:09.321: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:12 +0000 UTC }] May 20 00:31:09.321: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC }] May 20 00:31:09.321: INFO: May 20 00:31:09.321: INFO: StatefulSet ss has not reached scale 0, at 2 May 20 00:31:10.327: INFO: POD NODE PHASE GRACE CONDITIONS May 20 00:31:10.327: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 
00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:12 +0000 UTC }] May 20 00:31:10.327: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC }] May 20 00:31:10.327: INFO: May 20 00:31:10.327: INFO: StatefulSet ss has not reached scale 0, at 2 May 20 00:31:11.331: INFO: POD NODE PHASE GRACE CONDITIONS May 20 00:31:11.331: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:12 +0000 UTC }] May 20 00:31:11.331: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC }] May 20 00:31:11.331: INFO: May 20 00:31:11.331: INFO: StatefulSet ss has not reached scale 0, at 2 May 20 00:31:12.337: INFO: POD NODE PHASE GRACE CONDITIONS May 
20 00:31:12.337: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:12 +0000 UTC }] May 20 00:31:12.337: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC }] May 20 00:31:12.337: INFO: May 20 00:31:12.337: INFO: StatefulSet ss has not reached scale 0, at 2 May 20 00:31:13.342: INFO: POD NODE PHASE GRACE CONDITIONS May 20 00:31:13.342: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:12 +0000 UTC }] May 20 00:31:13.343: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC }] May 20 00:31:13.343: INFO: May 20 00:31:13.343: INFO: StatefulSet ss has not reached scale 0, at 2 May 20 00:31:14.347: INFO: POD NODE PHASE GRACE CONDITIONS May 20 00:31:14.347: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:12 +0000 UTC }] May 20 00:31:14.347: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 00:30:33 +0000 UTC }] May 20 00:31:14.347: INFO: May 20 00:31:14.347: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8682 May 20 00:31:15.351: INFO: Scaling statefulset ss to 0 May 20 00:31:15.360: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 20 00:31:15.362: INFO: Deleting all statefulset in ns statefulset-8682 May 20 00:31:15.364: INFO: Scaling statefulset ss to 0 May 20 
00:31:15.372: INFO: Waiting for statefulset status.replicas updated to 0 May 20 00:31:15.375: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:31:15.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8682" for this suite. • [SLOW TEST:62.758 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":170,"skipped":2754,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:31:15.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: 
listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:31:15.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4449" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":171,"skipped":2765,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:31:15.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:31:26.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6638" for this suite. • [SLOW TEST:11.243 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":288,"completed":172,"skipped":2799,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:31:26.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7785 STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 00:31:26.861: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 00:31:26.986: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 00:31:29.152: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 00:31:30.990: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:31:32.990: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:31:34.990: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:31:36.990: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:31:38.996: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:31:40.989: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:31:42.994: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 
00:31:44.989: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:31:46.996: INFO: The status of Pod netserver-0 is Running (Ready = true) May 20 00:31:47.001: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 20 00:31:51.066: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.184:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7785 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 00:31:51.066: INFO: >>> kubeConfig: /root/.kube/config I0520 00:31:51.101797 7 log.go:172] (0xc005108420) (0xc002086500) Create stream I0520 00:31:51.101826 7 log.go:172] (0xc005108420) (0xc002086500) Stream added, broadcasting: 1 I0520 00:31:51.103847 7 log.go:172] (0xc005108420) Reply frame received for 1 I0520 00:31:51.103891 7 log.go:172] (0xc005108420) (0xc0019120a0) Create stream I0520 00:31:51.103908 7 log.go:172] (0xc005108420) (0xc0019120a0) Stream added, broadcasting: 3 I0520 00:31:51.104957 7 log.go:172] (0xc005108420) Reply frame received for 3 I0520 00:31:51.105019 7 log.go:172] (0xc005108420) (0xc0020866e0) Create stream I0520 00:31:51.105044 7 log.go:172] (0xc005108420) (0xc0020866e0) Stream added, broadcasting: 5 I0520 00:31:51.106447 7 log.go:172] (0xc005108420) Reply frame received for 5 I0520 00:31:51.187809 7 log.go:172] (0xc005108420) Data frame received for 5 I0520 00:31:51.187835 7 log.go:172] (0xc0020866e0) (5) Data frame handling I0520 00:31:51.187854 7 log.go:172] (0xc005108420) Data frame received for 3 I0520 00:31:51.187863 7 log.go:172] (0xc0019120a0) (3) Data frame handling I0520 00:31:51.187871 7 log.go:172] (0xc0019120a0) (3) Data frame sent I0520 00:31:51.188153 7 log.go:172] (0xc005108420) Data frame received for 3 I0520 00:31:51.188198 7 log.go:172] (0xc0019120a0) (3) Data frame handling I0520 00:31:51.190148 7 log.go:172] (0xc005108420) Data 
frame received for 1 I0520 00:31:51.190169 7 log.go:172] (0xc002086500) (1) Data frame handling I0520 00:31:51.190179 7 log.go:172] (0xc002086500) (1) Data frame sent I0520 00:31:51.190297 7 log.go:172] (0xc005108420) (0xc002086500) Stream removed, broadcasting: 1 I0520 00:31:51.190343 7 log.go:172] (0xc005108420) Go away received I0520 00:31:51.190434 7 log.go:172] (0xc005108420) (0xc002086500) Stream removed, broadcasting: 1 I0520 00:31:51.190456 7 log.go:172] (0xc005108420) (0xc0019120a0) Stream removed, broadcasting: 3 I0520 00:31:51.190466 7 log.go:172] (0xc005108420) (0xc0020866e0) Stream removed, broadcasting: 5 May 20 00:31:51.190: INFO: Found all expected endpoints: [netserver-0] May 20 00:31:51.193: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.191:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7785 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 00:31:51.193: INFO: >>> kubeConfig: /root/.kube/config I0520 00:31:51.224379 7 log.go:172] (0xc000f08580) (0xc0018dad20) Create stream I0520 00:31:51.224420 7 log.go:172] (0xc000f08580) (0xc0018dad20) Stream added, broadcasting: 1 I0520 00:31:51.226319 7 log.go:172] (0xc000f08580) Reply frame received for 1 I0520 00:31:51.226346 7 log.go:172] (0xc000f08580) (0xc001ca86e0) Create stream I0520 00:31:51.226357 7 log.go:172] (0xc000f08580) (0xc001ca86e0) Stream added, broadcasting: 3 I0520 00:31:51.227105 7 log.go:172] (0xc000f08580) Reply frame received for 3 I0520 00:31:51.227147 7 log.go:172] (0xc000f08580) (0xc00216c460) Create stream I0520 00:31:51.227166 7 log.go:172] (0xc000f08580) (0xc00216c460) Stream added, broadcasting: 5 I0520 00:31:51.227979 7 log.go:172] (0xc000f08580) Reply frame received for 5 I0520 00:31:51.298747 7 log.go:172] (0xc000f08580) Data frame received for 5 I0520 00:31:51.298800 7 log.go:172] (0xc00216c460) (5) Data frame handling 
I0520 00:31:51.298836 7 log.go:172] (0xc000f08580) Data frame received for 3 I0520 00:31:51.298850 7 log.go:172] (0xc001ca86e0) (3) Data frame handling I0520 00:31:51.298865 7 log.go:172] (0xc001ca86e0) (3) Data frame sent I0520 00:31:51.298872 7 log.go:172] (0xc000f08580) Data frame received for 3 I0520 00:31:51.298881 7 log.go:172] (0xc001ca86e0) (3) Data frame handling I0520 00:31:51.300584 7 log.go:172] (0xc000f08580) Data frame received for 1 I0520 00:31:51.300608 7 log.go:172] (0xc0018dad20) (1) Data frame handling I0520 00:31:51.300632 7 log.go:172] (0xc0018dad20) (1) Data frame sent I0520 00:31:51.300652 7 log.go:172] (0xc000f08580) (0xc0018dad20) Stream removed, broadcasting: 1 I0520 00:31:51.300671 7 log.go:172] (0xc000f08580) Go away received I0520 00:31:51.300810 7 log.go:172] (0xc000f08580) (0xc0018dad20) Stream removed, broadcasting: 1 I0520 00:31:51.300841 7 log.go:172] (0xc000f08580) (0xc001ca86e0) Stream removed, broadcasting: 3 I0520 00:31:51.300854 7 log.go:172] (0xc000f08580) (0xc00216c460) Stream removed, broadcasting: 5 May 20 00:31:51.300: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:31:51.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7785" for this suite. 
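The exec streams above carry the networking test's per-endpoint check: the framework execs into host-test-container-pod and curls each netserver pod's /hostName handler, treating any non-empty body as a reachable endpoint. A minimal sketch of that check follows; the curl invocation is copied from the log, while the pod-IP argument and the local printf demonstration are stand-ins, since the real check runs inside the host test container against a netserver pod.

```shell
# Per-endpoint check the test execs inside host-test-container-pod.
# "$1" stands in for a netserver pod IP (10.244.1.184 in the log above);
# blank lines are stripped so an empty body counts as no answer.
check_hostname() {
  curl -g -q -s --max-time 15 --connect-timeout 1 "http://$1:8080/hostName" \
    | grep -v '^\s*$'
}

# The blank-line filter demonstrated locally: the trailing empty line is
# dropped and only the hostname survives, so `test -n` on the result
# distinguishes a real answer from an empty response.
printf 'netserver-0\n\n' | grep -v '^\s*$'
```

The test then compares the returned hostnames against the expected endpoint set, which is why the log reports "Found all expected endpoints: [netserver-0]" only after both streams complete.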
• [SLOW TEST:24.564 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":173,"skipped":2820,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:31:51.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-c0c6d049-33f1-4fd4-9504-28a6d52e5f87 STEP: Creating a pod to test consume secrets May 20 00:31:51.399: INFO: Waiting up to 5m0s for pod "pod-secrets-4b65c360-7d5b-46a1-900d-3bb753f95dba" in namespace "secrets-8365" to be "Succeeded or Failed" May 20 00:31:51.410: INFO: Pod "pod-secrets-4b65c360-7d5b-46a1-900d-3bb753f95dba": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.017842ms May 20 00:31:53.415: INFO: Pod "pod-secrets-4b65c360-7d5b-46a1-900d-3bb753f95dba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01567414s May 20 00:31:55.419: INFO: Pod "pod-secrets-4b65c360-7d5b-46a1-900d-3bb753f95dba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020225134s STEP: Saw pod success May 20 00:31:55.419: INFO: Pod "pod-secrets-4b65c360-7d5b-46a1-900d-3bb753f95dba" satisfied condition "Succeeded or Failed" May 20 00:31:55.423: INFO: Trying to get logs from node latest-worker pod pod-secrets-4b65c360-7d5b-46a1-900d-3bb753f95dba container secret-volume-test: STEP: delete the pod May 20 00:31:55.472: INFO: Waiting for pod pod-secrets-4b65c360-7d5b-46a1-900d-3bb753f95dba to disappear May 20 00:31:55.486: INFO: Pod pod-secrets-4b65c360-7d5b-46a1-900d-3bb753f95dba no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:31:55.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8365" for this suite. 
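The repeated 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' entries above come from a simple poll loop over the pod's status phase. A runnable sketch, with a stubbed get_phase standing in for the framework's status lookup (the real code uses client-go; the stub's Pending-then-Succeeded sequence is invented for illustration):

```shell
# Poll until the pod reaches a terminal phase, mirroring the log above.
# get_phase is a stub for a status lookup (e.g. kubectl get pod
# -o jsonpath='{.status.phase}'); it reports Pending twice, then Succeeded.
attempts=0
get_phase() {
  attempts=$((attempts + 1))
  if [ "$attempts" -lt 3 ]; then phase=Pending; else phase=Succeeded; fi
}

i=0
while [ "$i" -lt 10 ]; do   # 10 iterations stand in for the 5m0s deadline
  get_phase
  echo "Pod phase=\"$phase\" (poll $attempts)"
  case "$phase" in Succeeded|Failed) break ;; esac
  i=$((i + 1))
  # the real loop sleeps roughly 2s between polls; omitted here
done
echo "satisfied condition \"Succeeded or Failed\": $phase"
```

Only a terminal phase (Succeeded or Failed) ends the wait; Running merely produces another log line, which is why the elapsed times above grow in roughly two-second steps.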
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":174,"skipped":2824,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:31:55.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-2c898faf-5abf-4d8d-88ed-9a63db3bf8cc STEP: Creating a pod to test consume configMaps May 20 00:31:55.572: INFO: Waiting up to 5m0s for pod "pod-configmaps-cbc0fe18-26da-4484-8d2d-613538e3af76" in namespace "configmap-1777" to be "Succeeded or Failed" May 20 00:31:55.586: INFO: Pod "pod-configmaps-cbc0fe18-26da-4484-8d2d-613538e3af76": Phase="Pending", Reason="", readiness=false. Elapsed: 13.769355ms May 20 00:31:57.739: INFO: Pod "pod-configmaps-cbc0fe18-26da-4484-8d2d-613538e3af76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167436763s May 20 00:31:59.787: INFO: Pod "pod-configmaps-cbc0fe18-26da-4484-8d2d-613538e3af76": Phase="Running", Reason="", readiness=true. Elapsed: 4.215536623s May 20 00:32:01.792: INFO: Pod "pod-configmaps-cbc0fe18-26da-4484-8d2d-613538e3af76": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.22056466s STEP: Saw pod success May 20 00:32:01.792: INFO: Pod "pod-configmaps-cbc0fe18-26da-4484-8d2d-613538e3af76" satisfied condition "Succeeded or Failed" May 20 00:32:01.796: INFO: Trying to get logs from node latest-worker pod pod-configmaps-cbc0fe18-26da-4484-8d2d-613538e3af76 container configmap-volume-test: STEP: delete the pod May 20 00:32:01.818: INFO: Waiting for pod pod-configmaps-cbc0fe18-26da-4484-8d2d-613538e3af76 to disappear May 20 00:32:01.822: INFO: Pod pod-configmaps-cbc0fe18-26da-4484-8d2d-613538e3af76 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:32:01.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1777" for this suite. • [SLOW TEST:6.334 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":175,"skipped":2861,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:32:01.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting 
out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 20 00:32:02.478: INFO: created pod pod-service-account-defaultsa May 20 00:32:02.478: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 20 00:32:02.488: INFO: created pod pod-service-account-mountsa May 20 00:32:02.488: INFO: pod pod-service-account-mountsa service account token volume mount: true May 20 00:32:02.510: INFO: created pod pod-service-account-nomountsa May 20 00:32:02.510: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 20 00:32:02.572: INFO: created pod pod-service-account-defaultsa-mountspec May 20 00:32:02.572: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 20 00:32:02.605: INFO: created pod pod-service-account-mountsa-mountspec May 20 00:32:02.605: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 20 00:32:02.737: INFO: created pod pod-service-account-nomountsa-mountspec May 20 00:32:02.737: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 20 00:32:02.751: INFO: created pod pod-service-account-defaultsa-nomountspec May 20 00:32:02.751: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 20 00:32:02.801: INFO: created pod pod-service-account-mountsa-nomountspec May 20 00:32:02.801: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 20 00:32:02.878: INFO: created pod pod-service-account-nomountsa-nomountspec May 20 00:32:02.878: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 
00:32:02.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5776" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":176,"skipped":2890,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:32:03.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-zzgd STEP: Creating a pod to test atomic-volume-subpath May 20 00:32:03.213: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zzgd" in namespace "subpath-7842" to be "Succeeded or Failed" May 20 00:32:03.227: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Pending", Reason="", readiness=false. Elapsed: 13.864304ms May 20 00:32:05.491: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278445933s May 20 00:32:07.576: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.3634154s May 20 00:32:09.836: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623099834s May 20 00:32:12.135: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.921905078s May 20 00:32:14.250: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.037225825s May 20 00:32:16.260: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Running", Reason="", readiness=true. Elapsed: 13.047004721s May 20 00:32:18.263: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Running", Reason="", readiness=true. Elapsed: 15.049899883s May 20 00:32:20.268: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Running", Reason="", readiness=true. Elapsed: 17.054697921s May 20 00:32:22.272: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Running", Reason="", readiness=true. Elapsed: 19.059423507s May 20 00:32:24.277: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Running", Reason="", readiness=true. Elapsed: 21.063997174s May 20 00:32:26.280: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Running", Reason="", readiness=true. Elapsed: 23.066787786s May 20 00:32:28.283: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Running", Reason="", readiness=true. Elapsed: 25.069673985s May 20 00:32:30.286: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Running", Reason="", readiness=true. Elapsed: 27.07350146s May 20 00:32:32.291: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Running", Reason="", readiness=true. Elapsed: 29.077898471s May 20 00:32:34.296: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Running", Reason="", readiness=true. Elapsed: 31.082644227s May 20 00:32:36.299: INFO: Pod "pod-subpath-test-configmap-zzgd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 33.086172865s STEP: Saw pod success May 20 00:32:36.299: INFO: Pod "pod-subpath-test-configmap-zzgd" satisfied condition "Succeeded or Failed" May 20 00:32:36.302: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-zzgd container test-container-subpath-configmap-zzgd: STEP: delete the pod May 20 00:32:36.322: INFO: Waiting for pod pod-subpath-test-configmap-zzgd to disappear May 20 00:32:36.326: INFO: Pod pod-subpath-test-configmap-zzgd no longer exists STEP: Deleting pod pod-subpath-test-configmap-zzgd May 20 00:32:36.326: INFO: Deleting pod "pod-subpath-test-configmap-zzgd" in namespace "subpath-7842" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:32:36.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7842" for this suite. • [SLOW TEST:33.313 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":177,"skipped":2904,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 
STEP: Creating a kubernetes client May 20 00:32:36.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 20 00:32:40.974: INFO: Successfully updated pod "pod-update-a996ecad-726a-4495-a418-78e0e7ab02ac" STEP: verifying the updated pod is in kubernetes May 20 00:32:41.000: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:32:41.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9181" for this suite. 
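The DNS test that follows drives long `for i in ` + backquoted `seq 1 600` probe loops (recorded in full in the log). The core of each probe is: run a lookup, and write an OK marker file only if the answer was non-empty; the test pod then reads the markers back. A runnable sketch of that pattern, with dig replaced by stubs and a temp directory standing in for the pod's /results volume (both stand-ins are invented so the sketch runs anywhere):

```shell
# One probe iteration from the DNS test's command loop: write an OK marker
# only when the lookup returned a non-empty answer.
RESULTS=$(mktemp -d)              # stands in for the pod's /results volume

lookup() { echo "10.96.0.10"; }   # stub for: dig +notcp +noall +answer +search <name> A
check="$(lookup)" && test -n "$check" && echo OK > "$RESULTS/udp@dns-test-service"

empty_lookup() { :; }             # a failed lookup prints nothing, so no marker
check="$(empty_lookup)" && test -n "$check" \
  && echo OK > "$RESULTS/udp@missing-record" || true   # || true so the sketch continues

cat "$RESULTS/udp@dns-test-service"                       # marker for the answered lookup
test ! -e "$RESULTS/udp@missing-record" && echo "no marker for the empty answer"
```

Missing markers are what surface in the log as "Unable to read wheezy_udp@... from pod ...": the prober retries until every expected marker file appears or the 600-iteration budget runs out.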
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":178,"skipped":2948,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:32:41.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6865.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6865.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6865.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6865.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6865.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6865.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6865.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6865.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6865.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6865.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 20 00:32:47.215: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:47.220: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:47.223: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:47.226: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:47.233: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:47.236: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:47.239: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:47.244: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:47.251: INFO: Lookups using dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6865.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6865.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local jessie_udp@dns-test-service-2.dns-6865.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6865.svc.cluster.local]
May 20 00:32:52.257: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:52.261: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:52.264: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:52.267: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:52.275: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:52.278: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:52.281: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:52.284: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:52.290: INFO: Lookups using dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6865.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6865.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local jessie_udp@dns-test-service-2.dns-6865.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6865.svc.cluster.local]
May 20 00:32:57.257: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:57.260: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:57.262: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:57.265: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:57.272: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:57.274: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:57.276: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:57.278: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:32:57.284: INFO: Lookups using dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6865.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6865.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local jessie_udp@dns-test-service-2.dns-6865.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6865.svc.cluster.local]
May 20 00:33:02.257: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:02.261: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:02.268: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:02.271: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:02.279: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:02.282: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:02.284: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:02.286: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:02.291: INFO: Lookups using dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6865.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6865.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local jessie_udp@dns-test-service-2.dns-6865.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6865.svc.cluster.local]
May 20 00:33:07.256: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:07.259: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:07.262: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:07.266: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:07.276: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:07.279: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:07.282: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:07.285: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:07.292: INFO: Lookups using dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6865.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6865.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local jessie_udp@dns-test-service-2.dns-6865.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6865.svc.cluster.local]
May 20 00:33:12.256: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:12.259: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:12.262: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:12.265: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:12.274: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:12.276: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:12.279: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:12.282: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6865.svc.cluster.local from pod dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad: the server could not find the requested resource (get pods dns-test-e7d65e46-3238-4494-950b-b9b3db290bad)
May 20 00:33:12.287: INFO: Lookups using dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6865.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6865.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6865.svc.cluster.local jessie_udp@dns-test-service-2.dns-6865.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6865.svc.cluster.local]
May 20 00:33:17.292: INFO: DNS probes using dns-6865/dns-test-e7d65e46-3238-4494-950b-b9b3db290bad succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:33:17.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6865" for this suite.
• [SLOW TEST:36.462 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":179,"skipped":2978,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:33:17.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3368 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3368;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3368 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3368;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3368.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3368.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3368.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3368.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3368.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3368.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3368.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3368.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3368.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3368.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3368.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3368.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3368.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 83.209.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.209.83_udp@PTR;check="$$(dig +tcp +noall +answer +search 83.209.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.209.83_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3368 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3368;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3368 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3368;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3368.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3368.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3368.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3368.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3368.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3368.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3368.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3368.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3368.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3368.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3368.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3368.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3368.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 83.209.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.209.83_udp@PTR;check="$$(dig +tcp +noall +answer +search 83.209.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.209.83_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 20 00:33:24.474: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.478: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.480: INFO: Unable to read wheezy_udp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.483: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.485: INFO: Unable to read wheezy_udp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.487: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.489: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.492: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.509: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.512: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.514: INFO: Unable to read jessie_udp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.516: INFO: Unable to read jessie_tcp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.519: INFO: Unable to read jessie_udp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.522: INFO: Unable to read jessie_tcp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.525: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.530: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:24.585: INFO: Lookups using dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3368 wheezy_tcp@dns-test-service.dns-3368 wheezy_udp@dns-test-service.dns-3368.svc wheezy_tcp@dns-test-service.dns-3368.svc wheezy_udp@_http._tcp.dns-test-service.dns-3368.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3368.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3368 jessie_tcp@dns-test-service.dns-3368 jessie_udp@dns-test-service.dns-3368.svc jessie_tcp@dns-test-service.dns-3368.svc jessie_udp@_http._tcp.dns-test-service.dns-3368.svc jessie_tcp@_http._tcp.dns-test-service.dns-3368.svc]
May 20 00:33:29.590: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.594: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.597: INFO: Unable to read wheezy_udp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.599: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.603: INFO: Unable to read wheezy_udp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.606: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.608: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.611: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.632: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.635: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.638: INFO: Unable to read jessie_udp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.641: INFO: Unable to read jessie_tcp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.644: INFO: Unable to read jessie_udp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.647: INFO: Unable to read jessie_tcp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.650: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.653: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:29.671: INFO: Lookups using dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3368 wheezy_tcp@dns-test-service.dns-3368 wheezy_udp@dns-test-service.dns-3368.svc wheezy_tcp@dns-test-service.dns-3368.svc wheezy_udp@_http._tcp.dns-test-service.dns-3368.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3368.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3368 jessie_tcp@dns-test-service.dns-3368 jessie_udp@dns-test-service.dns-3368.svc jessie_tcp@dns-test-service.dns-3368.svc jessie_udp@_http._tcp.dns-test-service.dns-3368.svc jessie_tcp@_http._tcp.dns-test-service.dns-3368.svc]
May 20 00:33:34.591: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:34.595: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:34.597: INFO: Unable to read wheezy_udp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:34.600: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:34.602: INFO: Unable to read wheezy_udp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:34.605: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:34.607: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:34.610: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:34.922: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:34.924: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:34.926: INFO: Unable to read jessie_udp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:34.927: INFO: Unable to read jessie_tcp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:34.929: INFO: Unable to read jessie_udp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:34.931: INFO: Unable to read jessie_tcp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f)
May 20 00:33:34.933:
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:35.034: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:35.052: INFO: Lookups using dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3368 wheezy_tcp@dns-test-service.dns-3368 wheezy_udp@dns-test-service.dns-3368.svc wheezy_tcp@dns-test-service.dns-3368.svc wheezy_udp@_http._tcp.dns-test-service.dns-3368.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3368.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3368 jessie_tcp@dns-test-service.dns-3368 jessie_udp@dns-test-service.dns-3368.svc jessie_tcp@dns-test-service.dns-3368.svc jessie_udp@_http._tcp.dns-test-service.dns-3368.svc jessie_tcp@_http._tcp.dns-test-service.dns-3368.svc] May 20 00:33:39.591: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.595: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.598: INFO: Unable to read wheezy_udp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 
00:33:39.602: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.604: INFO: Unable to read wheezy_udp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.607: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.610: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.613: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.636: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.639: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.643: INFO: Unable to read jessie_udp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods 
dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.646: INFO: Unable to read jessie_tcp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.650: INFO: Unable to read jessie_udp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.652: INFO: Unable to read jessie_tcp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.656: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.659: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:39.706: INFO: Lookups using dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3368 wheezy_tcp@dns-test-service.dns-3368 wheezy_udp@dns-test-service.dns-3368.svc wheezy_tcp@dns-test-service.dns-3368.svc wheezy_udp@_http._tcp.dns-test-service.dns-3368.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3368.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3368 jessie_tcp@dns-test-service.dns-3368 jessie_udp@dns-test-service.dns-3368.svc jessie_tcp@dns-test-service.dns-3368.svc 
jessie_udp@_http._tcp.dns-test-service.dns-3368.svc jessie_tcp@_http._tcp.dns-test-service.dns-3368.svc] May 20 00:33:44.591: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.596: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.599: INFO: Unable to read wheezy_udp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.602: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.605: INFO: Unable to read wheezy_udp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.608: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.612: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.618: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3368.svc from pod 
dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.638: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.640: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.642: INFO: Unable to read jessie_udp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.645: INFO: Unable to read jessie_tcp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.647: INFO: Unable to read jessie_udp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.648: INFO: Unable to read jessie_tcp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.650: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.652: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:44.665: INFO: Lookups using dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3368 wheezy_tcp@dns-test-service.dns-3368 wheezy_udp@dns-test-service.dns-3368.svc wheezy_tcp@dns-test-service.dns-3368.svc wheezy_udp@_http._tcp.dns-test-service.dns-3368.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3368.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3368 jessie_tcp@dns-test-service.dns-3368 jessie_udp@dns-test-service.dns-3368.svc jessie_tcp@dns-test-service.dns-3368.svc jessie_udp@_http._tcp.dns-test-service.dns-3368.svc jessie_tcp@_http._tcp.dns-test-service.dns-3368.svc] May 20 00:33:49.590: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.592: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.595: INFO: Unable to read wheezy_udp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.598: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.601: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.603: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.606: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.608: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.627: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.630: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.632: INFO: Unable to read jessie_udp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.635: INFO: Unable to read jessie_tcp@dns-test-service.dns-3368 from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.638: 
INFO: Unable to read jessie_udp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.642: INFO: Unable to read jessie_tcp@dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.644: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.648: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3368.svc from pod dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f: the server could not find the requested resource (get pods dns-test-4b2795ac-f894-4199-9666-2400f059ce7f) May 20 00:33:49.667: INFO: Lookups using dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3368 wheezy_tcp@dns-test-service.dns-3368 wheezy_udp@dns-test-service.dns-3368.svc wheezy_tcp@dns-test-service.dns-3368.svc wheezy_udp@_http._tcp.dns-test-service.dns-3368.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3368.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3368 jessie_tcp@dns-test-service.dns-3368 jessie_udp@dns-test-service.dns-3368.svc jessie_tcp@dns-test-service.dns-3368.svc jessie_udp@_http._tcp.dns-test-service.dns-3368.svc jessie_tcp@_http._tcp.dns-test-service.dns-3368.svc] May 20 00:33:54.668: INFO: DNS probes using dns-3368/dns-test-4b2795ac-f894-4199-9666-2400f059ce7f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:33:55.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3368" for this suite.

• [SLOW TEST:38.046 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":180,"skipped":3013,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:33:55.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:34:11.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7978" for this suite.

• [SLOW TEST:16.291 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":288,"completed":181,"skipped":3036,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:34:11.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:34:15.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-853" for this suite.
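[Editor's note] The read-only busybox check above exercises the `readOnlyRootFilesystem` field of a container's `securityContext`: the kubelet mounts the container's root filesystem read-only, so the write the test container attempts must fail. As a minimal illustrative sketch only (the pod name, namespace, image tag, and command below are assumptions, not values taken from this log), the shape of manifest such a test submits can be built like this:

```python
import json

def read_only_busybox_pod(name="kubelet-readonly-demo", namespace="kubelet-test-demo"):
    """Build a Pod manifest whose container runs with a read-only root filesystem.

    All names here are hypothetical; only the field layout mirrors the
    core/v1 Pod schema.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "containers": [{
                "name": "busybox",
                "image": "busybox:1.29",
                # The write to / is expected to fail because the root
                # filesystem is mounted read-only.
                "command": ["/bin/sh", "-c", "echo test > /file; sleep 240"],
                "securityContext": {"readOnlyRootFilesystem": True},
            }],
            "restartPolicy": "Never",
        },
    }

pod = read_only_busybox_pod()
print(json.dumps(pod, indent=2))
```

Such a manifest could be applied with `kubectl apply -f -`; the e2e test then asserts that the container's write attempt does not succeed.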
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":182,"skipped":3043,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:34:15.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 20 00:34:16.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-331
I0520 00:34:16.072062 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-331, replica count: 1
I0520 00:34:17.122420 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0520 00:34:18.122662 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0520 00:34:19.122869 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0520 00:34:20.123079 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 20 00:34:20.254: INFO: Created:
latency-svc-rbsjv May 20 00:34:20.287: INFO: Got endpoints: latency-svc-rbsjv [64.260488ms] May 20 00:34:20.442: INFO: Created: latency-svc-mw5kw May 20 00:34:20.468: INFO: Got endpoints: latency-svc-mw5kw [180.557725ms] May 20 00:34:20.469: INFO: Created: latency-svc-899lq May 20 00:34:20.480: INFO: Got endpoints: latency-svc-899lq [192.744137ms] May 20 00:34:20.527: INFO: Created: latency-svc-fqptf May 20 00:34:20.534: INFO: Got endpoints: latency-svc-fqptf [246.582339ms] May 20 00:34:20.591: INFO: Created: latency-svc-hwdpj May 20 00:34:20.606: INFO: Got endpoints: latency-svc-hwdpj [318.989272ms] May 20 00:34:20.630: INFO: Created: latency-svc-52q4c May 20 00:34:20.644: INFO: Got endpoints: latency-svc-52q4c [356.126391ms] May 20 00:34:20.660: INFO: Created: latency-svc-q88wn May 20 00:34:20.674: INFO: Got endpoints: latency-svc-q88wn [386.758466ms] May 20 00:34:20.691: INFO: Created: latency-svc-djz9c May 20 00:34:20.772: INFO: Got endpoints: latency-svc-djz9c [484.220202ms] May 20 00:34:20.778: INFO: Created: latency-svc-v7rld May 20 00:34:20.789: INFO: Got endpoints: latency-svc-v7rld [501.021176ms] May 20 00:34:20.804: INFO: Created: latency-svc-85vcg May 20 00:34:20.819: INFO: Got endpoints: latency-svc-85vcg [531.648723ms] May 20 00:34:20.835: INFO: Created: latency-svc-llmmd May 20 00:34:20.860: INFO: Got endpoints: latency-svc-llmmd [572.315129ms] May 20 00:34:20.920: INFO: Created: latency-svc-n7ncx May 20 00:34:20.928: INFO: Got endpoints: latency-svc-n7ncx [640.418061ms] May 20 00:34:20.960: INFO: Created: latency-svc-57xtd May 20 00:34:20.976: INFO: Got endpoints: latency-svc-57xtd [688.251705ms] May 20 00:34:20.996: INFO: Created: latency-svc-p49bz May 20 00:34:21.009: INFO: Got endpoints: latency-svc-p49bz [721.81346ms] May 20 00:34:21.082: INFO: Created: latency-svc-pbkkx May 20 00:34:21.086: INFO: Got endpoints: latency-svc-pbkkx [799.001043ms] May 20 00:34:21.124: INFO: Created: latency-svc-fvct2 May 20 00:34:21.140: INFO: Got endpoints: 
latency-svc-fvct2 [852.797194ms] May 20 00:34:21.170: INFO: Created: latency-svc-sp66m May 20 00:34:21.237: INFO: Got endpoints: latency-svc-sp66m [768.888454ms] May 20 00:34:21.260: INFO: Created: latency-svc-vbp6q May 20 00:34:21.271: INFO: Got endpoints: latency-svc-vbp6q [790.970256ms] May 20 00:34:21.321: INFO: Created: latency-svc-tzvgm May 20 00:34:21.381: INFO: Got endpoints: latency-svc-tzvgm [847.151084ms] May 20 00:34:21.393: INFO: Created: latency-svc-7hvxm May 20 00:34:21.410: INFO: Created: latency-svc-bvclw May 20 00:34:21.411: INFO: Got endpoints: latency-svc-7hvxm [804.341137ms] May 20 00:34:21.434: INFO: Got endpoints: latency-svc-bvclw [790.647887ms] May 20 00:34:21.464: INFO: Created: latency-svc-4hr4s May 20 00:34:21.477: INFO: Got endpoints: latency-svc-4hr4s [802.786852ms] May 20 00:34:21.554: INFO: Created: latency-svc-g27x4 May 20 00:34:21.591: INFO: Got endpoints: latency-svc-g27x4 [818.943227ms] May 20 00:34:21.729: INFO: Created: latency-svc-ztmls May 20 00:34:21.732: INFO: Got endpoints: latency-svc-ztmls [943.483495ms] May 20 00:34:21.788: INFO: Created: latency-svc-gkr6p May 20 00:34:21.802: INFO: Got endpoints: latency-svc-gkr6p [982.696748ms] May 20 00:34:21.824: INFO: Created: latency-svc-rfgfr May 20 00:34:21.890: INFO: Got endpoints: latency-svc-rfgfr [1.030312124s] May 20 00:34:21.897: INFO: Created: latency-svc-pwd4t May 20 00:34:21.904: INFO: Got endpoints: latency-svc-pwd4t [975.784263ms] May 20 00:34:21.926: INFO: Created: latency-svc-6lfzd May 20 00:34:21.940: INFO: Got endpoints: latency-svc-6lfzd [964.125722ms] May 20 00:34:21.961: INFO: Created: latency-svc-7rfr9 May 20 00:34:21.970: INFO: Got endpoints: latency-svc-7rfr9 [961.047422ms] May 20 00:34:21.986: INFO: Created: latency-svc-mhpj4 May 20 00:34:22.053: INFO: Got endpoints: latency-svc-mhpj4 [966.55305ms] May 20 00:34:22.088: INFO: Created: latency-svc-wlfq5 May 20 00:34:22.091: INFO: Got endpoints: latency-svc-wlfq5 [951.27668ms] May 20 00:34:22.143: INFO: 
Created: latency-svc-xj7hd May 20 00:34:22.151: INFO: Got endpoints: latency-svc-xj7hd [913.709251ms] May 20 00:34:22.204: INFO: Created: latency-svc-z99nk May 20 00:34:22.211: INFO: Got endpoints: latency-svc-z99nk [939.893652ms] May 20 00:34:22.232: INFO: Created: latency-svc-lqfwv May 20 00:34:22.256: INFO: Got endpoints: latency-svc-lqfwv [874.925353ms] May 20 00:34:22.287: INFO: Created: latency-svc-2plnf May 20 00:34:22.296: INFO: Got endpoints: latency-svc-2plnf [884.949025ms] May 20 00:34:22.366: INFO: Created: latency-svc-25mkn May 20 00:34:22.401: INFO: Got endpoints: latency-svc-25mkn [966.247428ms] May 20 00:34:22.431: INFO: Created: latency-svc-9b6kq May 20 00:34:22.441: INFO: Got endpoints: latency-svc-9b6kq [963.689716ms] May 20 00:34:22.460: INFO: Created: latency-svc-4kd8t May 20 00:34:22.537: INFO: Got endpoints: latency-svc-4kd8t [946.758928ms] May 20 00:34:22.549: INFO: Created: latency-svc-497wc May 20 00:34:22.568: INFO: Got endpoints: latency-svc-497wc [836.128478ms] May 20 00:34:22.568: INFO: Created: latency-svc-2bn4k May 20 00:34:22.593: INFO: Got endpoints: latency-svc-2bn4k [791.496275ms] May 20 00:34:22.619: INFO: Created: latency-svc-c44nd May 20 00:34:22.627: INFO: Got endpoints: latency-svc-c44nd [736.507451ms] May 20 00:34:22.705: INFO: Created: latency-svc-cvrnc May 20 00:34:22.718: INFO: Got endpoints: latency-svc-cvrnc [814.875266ms] May 20 00:34:22.748: INFO: Created: latency-svc-jq56j May 20 00:34:22.759: INFO: Got endpoints: latency-svc-jq56j [818.86395ms] May 20 00:34:22.866: INFO: Created: latency-svc-2hgx5 May 20 00:34:22.872: INFO: Got endpoints: latency-svc-2hgx5 [901.280338ms] May 20 00:34:22.923: INFO: Created: latency-svc-h94qs May 20 00:34:22.934: INFO: Got endpoints: latency-svc-h94qs [880.716408ms] May 20 00:34:22.961: INFO: Created: latency-svc-d77qx May 20 00:34:23.040: INFO: Got endpoints: latency-svc-d77qx [948.437498ms] May 20 00:34:23.042: INFO: Created: latency-svc-m68dr May 20 00:34:23.060: INFO: Got 
endpoints: latency-svc-m68dr [909.070347ms] May 20 00:34:23.120: INFO: Created: latency-svc-wfqlm May 20 00:34:23.201: INFO: Got endpoints: latency-svc-wfqlm [990.343947ms] May 20 00:34:23.207: INFO: Created: latency-svc-svrhg May 20 00:34:23.237: INFO: Got endpoints: latency-svc-svrhg [981.215191ms] May 20 00:34:23.271: INFO: Created: latency-svc-fgtb7 May 20 00:34:23.351: INFO: Got endpoints: latency-svc-fgtb7 [1.055200128s] May 20 00:34:23.373: INFO: Created: latency-svc-g79x5 May 20 00:34:23.385: INFO: Got endpoints: latency-svc-g79x5 [984.093819ms] May 20 00:34:23.427: INFO: Created: latency-svc-x8dn7 May 20 00:34:23.439: INFO: Got endpoints: latency-svc-x8dn7 [998.501339ms] May 20 00:34:23.502: INFO: Created: latency-svc-jzdvf May 20 00:34:23.523: INFO: Got endpoints: latency-svc-jzdvf [985.024798ms] May 20 00:34:23.523: INFO: Created: latency-svc-q4k5z May 20 00:34:23.535: INFO: Got endpoints: latency-svc-q4k5z [966.615017ms] May 20 00:34:23.554: INFO: Created: latency-svc-vkjpd May 20 00:34:23.566: INFO: Got endpoints: latency-svc-vkjpd [972.470066ms] May 20 00:34:23.601: INFO: Created: latency-svc-jml67 May 20 00:34:23.663: INFO: Got endpoints: latency-svc-jml67 [1.036457099s] May 20 00:34:23.679: INFO: Created: latency-svc-zzddq May 20 00:34:23.721: INFO: Got endpoints: latency-svc-zzddq [1.002076608s] May 20 00:34:23.757: INFO: Created: latency-svc-npjmn May 20 00:34:23.830: INFO: Got endpoints: latency-svc-npjmn [1.071571393s] May 20 00:34:23.836: INFO: Created: latency-svc-t7c76 May 20 00:34:23.846: INFO: Got endpoints: latency-svc-t7c76 [974.568443ms] May 20 00:34:23.871: INFO: Created: latency-svc-h4nzn May 20 00:34:23.885: INFO: Got endpoints: latency-svc-h4nzn [951.197188ms] May 20 00:34:23.913: INFO: Created: latency-svc-dhbsw May 20 00:34:23.980: INFO: Got endpoints: latency-svc-dhbsw [940.428857ms] May 20 00:34:23.985: INFO: Created: latency-svc-s9w2t May 20 00:34:24.003: INFO: Got endpoints: latency-svc-s9w2t [943.216721ms] May 20 00:34:24.027: 
INFO: Created: latency-svc-f5gmk May 20 00:34:24.036: INFO: Got endpoints: latency-svc-f5gmk [834.519839ms] May 20 00:34:24.057: INFO: Created: latency-svc-mrdnt May 20 00:34:24.066: INFO: Got endpoints: latency-svc-mrdnt [829.203848ms] May 20 00:34:24.162: INFO: Created: latency-svc-zpwh5 May 20 00:34:24.175: INFO: Got endpoints: latency-svc-zpwh5 [823.712535ms] May 20 00:34:24.200: INFO: Created: latency-svc-zghqh May 20 00:34:24.231: INFO: Got endpoints: latency-svc-zghqh [845.92015ms] May 20 00:34:24.255: INFO: Created: latency-svc-ln8d7 May 20 00:34:24.339: INFO: Got endpoints: latency-svc-ln8d7 [899.907377ms] May 20 00:34:24.341: INFO: Created: latency-svc-hbx7g May 20 00:34:24.355: INFO: Got endpoints: latency-svc-hbx7g [832.446274ms] May 20 00:34:24.393: INFO: Created: latency-svc-b9qw8 May 20 00:34:24.398: INFO: Got endpoints: latency-svc-b9qw8 [862.445796ms] May 20 00:34:24.417: INFO: Created: latency-svc-2c2wf May 20 00:34:24.435: INFO: Got endpoints: latency-svc-2c2wf [868.774904ms] May 20 00:34:24.525: INFO: Created: latency-svc-w8pbk May 20 00:34:24.529: INFO: Got endpoints: latency-svc-w8pbk [865.64938ms] May 20 00:34:24.585: INFO: Created: latency-svc-4dk8h May 20 00:34:24.596: INFO: Got endpoints: latency-svc-4dk8h [875.62019ms] May 20 00:34:24.615: INFO: Created: latency-svc-wvlgm May 20 00:34:24.693: INFO: Got endpoints: latency-svc-wvlgm [862.358051ms] May 20 00:34:24.700: INFO: Created: latency-svc-r9zjt May 20 00:34:24.705: INFO: Got endpoints: latency-svc-r9zjt [858.999201ms] May 20 00:34:24.722: INFO: Created: latency-svc-d8k94 May 20 00:34:24.747: INFO: Got endpoints: latency-svc-d8k94 [862.039932ms] May 20 00:34:24.783: INFO: Created: latency-svc-sdml6 May 20 00:34:24.848: INFO: Got endpoints: latency-svc-sdml6 [867.654546ms] May 20 00:34:24.867: INFO: Created: latency-svc-xw72r May 20 00:34:24.885: INFO: Got endpoints: latency-svc-xw72r [881.826151ms] May 20 00:34:24.909: INFO: Created: latency-svc-lj9d2 May 20 00:34:24.931: INFO: Got 
endpoints: latency-svc-lj9d2 [895.164345ms] May 20 00:34:25.004: INFO: Created: latency-svc-rsnqx May 20 00:34:25.015: INFO: Got endpoints: latency-svc-rsnqx [948.094845ms] May 20 00:34:25.052: INFO: Created: latency-svc-zbpvd May 20 00:34:25.063: INFO: Got endpoints: latency-svc-zbpvd [888.551221ms] May 20 00:34:25.089: INFO: Created: latency-svc-5bg4k May 20 00:34:25.579: INFO: Got endpoints: latency-svc-5bg4k [1.348097803s] May 20 00:34:25.583: INFO: Created: latency-svc-dpk89 May 20 00:34:25.591: INFO: Got endpoints: latency-svc-dpk89 [1.251280526s] May 20 00:34:25.623: INFO: Created: latency-svc-wmhln May 20 00:34:25.648: INFO: Got endpoints: latency-svc-wmhln [1.292970657s] May 20 00:34:25.734: INFO: Created: latency-svc-ddk9v May 20 00:34:25.761: INFO: Got endpoints: latency-svc-ddk9v [1.363540364s] May 20 00:34:25.784: INFO: Created: latency-svc-4rnlr May 20 00:34:25.796: INFO: Got endpoints: latency-svc-4rnlr [1.360846337s] May 20 00:34:25.815: INFO: Created: latency-svc-6csc2 May 20 00:34:25.826: INFO: Got endpoints: latency-svc-6csc2 [1.297321069s] May 20 00:34:25.884: INFO: Created: latency-svc-d2kjq May 20 00:34:25.893: INFO: Got endpoints: latency-svc-d2kjq [1.296924185s] May 20 00:34:25.917: INFO: Created: latency-svc-br8rv May 20 00:34:25.931: INFO: Got endpoints: latency-svc-br8rv [1.238235145s] May 20 00:34:25.948: INFO: Created: latency-svc-hp5jc May 20 00:34:25.959: INFO: Got endpoints: latency-svc-hp5jc [1.253712544s] May 20 00:34:26.040: INFO: Created: latency-svc-kv8jc May 20 00:34:26.045: INFO: Got endpoints: latency-svc-kv8jc [1.297629539s] May 20 00:34:26.097: INFO: Created: latency-svc-98bm2 May 20 00:34:26.122: INFO: Got endpoints: latency-svc-98bm2 [1.273862926s] May 20 00:34:26.195: INFO: Created: latency-svc-9rdwv May 20 00:34:26.205: INFO: Got endpoints: latency-svc-9rdwv [1.319921774s] May 20 00:34:26.241: INFO: Created: latency-svc-lm626 May 20 00:34:26.254: INFO: Got endpoints: latency-svc-lm626 [1.322694398s] May 20 00:34:26.270: 
INFO: Created: latency-svc-4hbsm May 20 00:34:26.285: INFO: Got endpoints: latency-svc-4hbsm [1.270166963s] May 20 00:34:26.357: INFO: Created: latency-svc-h6zpm May 20 00:34:26.369: INFO: Got endpoints: latency-svc-h6zpm [1.305332502s] May 20 00:34:26.409: INFO: Created: latency-svc-dzlvr May 20 00:34:26.451: INFO: Got endpoints: latency-svc-dzlvr [872.006761ms] May 20 00:34:26.534: INFO: Created: latency-svc-tp4k7 May 20 00:34:26.546: INFO: Got endpoints: latency-svc-tp4k7 [955.843107ms] May 20 00:34:26.582: INFO: Created: latency-svc-p9fp5 May 20 00:34:26.595: INFO: Got endpoints: latency-svc-p9fp5 [946.49762ms] May 20 00:34:26.675: INFO: Created: latency-svc-vrgms May 20 00:34:26.678: INFO: Got endpoints: latency-svc-vrgms [916.327011ms] May 20 00:34:26.703: INFO: Created: latency-svc-vktqp May 20 00:34:26.726: INFO: Got endpoints: latency-svc-vktqp [930.507695ms] May 20 00:34:26.750: INFO: Created: latency-svc-55tbh May 20 00:34:26.837: INFO: Got endpoints: latency-svc-55tbh [1.010922041s] May 20 00:34:26.838: INFO: Created: latency-svc-zfcbx May 20 00:34:26.848: INFO: Got endpoints: latency-svc-zfcbx [954.736418ms] May 20 00:34:26.895: INFO: Created: latency-svc-j526r May 20 00:34:26.908: INFO: Got endpoints: latency-svc-j526r [976.674122ms] May 20 00:34:26.926: INFO: Created: latency-svc-rzlw4 May 20 00:34:27.004: INFO: Got endpoints: latency-svc-rzlw4 [1.044550761s] May 20 00:34:27.006: INFO: Created: latency-svc-zjjh9 May 20 00:34:27.010: INFO: Got endpoints: latency-svc-zjjh9 [965.452572ms] May 20 00:34:27.057: INFO: Created: latency-svc-8dwlx May 20 00:34:27.071: INFO: Got endpoints: latency-svc-8dwlx [949.334973ms] May 20 00:34:27.148: INFO: Created: latency-svc-cxmqs May 20 00:34:27.171: INFO: Got endpoints: latency-svc-cxmqs [965.683559ms] May 20 00:34:27.237: INFO: Created: latency-svc-286r2 May 20 00:34:27.315: INFO: Got endpoints: latency-svc-286r2 [1.061159632s] May 20 00:34:27.317: INFO: Created: latency-svc-44hx5 May 20 00:34:27.324: INFO: Got 
endpoints: latency-svc-44hx5 [1.038971929s] May 20 00:34:27.351: INFO: Created: latency-svc-gx8nd May 20 00:34:27.366: INFO: Got endpoints: latency-svc-gx8nd [997.343866ms] May 20 00:34:27.386: INFO: Created: latency-svc-bt5sd May 20 00:34:27.471: INFO: Got endpoints: latency-svc-bt5sd [1.019748101s] May 20 00:34:27.489: INFO: Created: latency-svc-qzgv5 May 20 00:34:27.510: INFO: Got endpoints: latency-svc-qzgv5 [963.955682ms] May 20 00:34:27.555: INFO: Created: latency-svc-9jhms May 20 00:34:27.565: INFO: Got endpoints: latency-svc-9jhms [970.15266ms] May 20 00:34:27.615: INFO: Created: latency-svc-9gxgg May 20 00:34:27.638: INFO: Got endpoints: latency-svc-9gxgg [960.674059ms] May 20 00:34:27.639: INFO: Created: latency-svc-qbvnb May 20 00:34:27.657: INFO: Got endpoints: latency-svc-qbvnb [930.641142ms] May 20 00:34:27.688: INFO: Created: latency-svc-ww5wg May 20 00:34:27.711: INFO: Got endpoints: latency-svc-ww5wg [873.523889ms] May 20 00:34:27.800: INFO: Created: latency-svc-cqqpk May 20 00:34:27.810: INFO: Got endpoints: latency-svc-cqqpk [961.595084ms] May 20 00:34:27.837: INFO: Created: latency-svc-kkszq May 20 00:34:27.851: INFO: Got endpoints: latency-svc-kkszq [943.400083ms] May 20 00:34:27.880: INFO: Created: latency-svc-ng9fb May 20 00:34:27.895: INFO: Got endpoints: latency-svc-ng9fb [890.768128ms] May 20 00:34:27.945: INFO: Created: latency-svc-fcjxm May 20 00:34:27.981: INFO: Got endpoints: latency-svc-fcjxm [970.124581ms] May 20 00:34:27.981: INFO: Created: latency-svc-bw4vb May 20 00:34:28.005: INFO: Got endpoints: latency-svc-bw4vb [933.371459ms] May 20 00:34:28.035: INFO: Created: latency-svc-c4f75 May 20 00:34:28.105: INFO: Got endpoints: latency-svc-c4f75 [934.509007ms] May 20 00:34:28.120: INFO: Created: latency-svc-h2wz5 May 20 00:34:28.128: INFO: Got endpoints: latency-svc-h2wz5 [813.321366ms] May 20 00:34:28.148: INFO: Created: latency-svc-lgvbf May 20 00:34:28.159: INFO: Got endpoints: latency-svc-lgvbf [835.11547ms] May 20 00:34:28.256: 
INFO: Created: latency-svc-9nqsf May 20 00:34:28.282: INFO: Created: latency-svc-5tzxt May 20 00:34:28.282: INFO: Got endpoints: latency-svc-9nqsf [915.774571ms] May 20 00:34:28.311: INFO: Got endpoints: latency-svc-5tzxt [840.449558ms] May 20 00:34:28.399: INFO: Created: latency-svc-qs2b2 May 20 00:34:28.418: INFO: Got endpoints: latency-svc-qs2b2 [907.930935ms] May 20 00:34:28.448: INFO: Created: latency-svc-6kwr9 May 20 00:34:28.461: INFO: Got endpoints: latency-svc-6kwr9 [895.690583ms] May 20 00:34:28.557: INFO: Created: latency-svc-7l9pn May 20 00:34:28.560: INFO: Got endpoints: latency-svc-7l9pn [921.73289ms] May 20 00:34:28.604: INFO: Created: latency-svc-gkbgw May 20 00:34:28.628: INFO: Got endpoints: latency-svc-gkbgw [971.223851ms] May 20 00:34:28.653: INFO: Created: latency-svc-7b6tj May 20 00:34:28.759: INFO: Got endpoints: latency-svc-7b6tj [1.047890691s] May 20 00:34:28.762: INFO: Created: latency-svc-czqhc May 20 00:34:28.774: INFO: Got endpoints: latency-svc-czqhc [964.006523ms] May 20 00:34:28.821: INFO: Created: latency-svc-wrkjg May 20 00:34:28.835: INFO: Got endpoints: latency-svc-wrkjg [983.990785ms] May 20 00:34:28.920: INFO: Created: latency-svc-z2j7v May 20 00:34:28.925: INFO: Got endpoints: latency-svc-z2j7v [1.030400877s] May 20 00:34:28.958: INFO: Created: latency-svc-x8j97 May 20 00:34:28.989: INFO: Got endpoints: latency-svc-x8j97 [1.008458991s] May 20 00:34:29.019: INFO: Created: latency-svc-c7mm6 May 20 00:34:29.064: INFO: Got endpoints: latency-svc-c7mm6 [1.059487162s] May 20 00:34:29.120: INFO: Created: latency-svc-ccwtc May 20 00:34:29.131: INFO: Got endpoints: latency-svc-ccwtc [1.026088365s] May 20 00:34:29.151: INFO: Created: latency-svc-2t76q May 20 00:34:29.162: INFO: Got endpoints: latency-svc-2t76q [1.03315212s] May 20 00:34:29.238: INFO: Created: latency-svc-4xw2l May 20 00:34:29.246: INFO: Got endpoints: latency-svc-4xw2l [1.087043699s] May 20 00:34:29.277: INFO: Created: latency-svc-nfsv9 May 20 00:34:29.300: INFO: Got 
endpoints: latency-svc-nfsv9 [1.018335176s] May 20 00:34:29.332: INFO: Created: latency-svc-lktxd May 20 00:34:29.399: INFO: Got endpoints: latency-svc-lktxd [1.08737353s] May 20 00:34:29.427: INFO: Created: latency-svc-j88xc May 20 00:34:29.439: INFO: Got endpoints: latency-svc-j88xc [1.020341122s] May 20 00:34:29.469: INFO: Created: latency-svc-xwrb7 May 20 00:34:29.482: INFO: Got endpoints: latency-svc-xwrb7 [1.02117717s] May 20 00:34:29.498: INFO: Created: latency-svc-5jjp2 May 20 00:34:29.559: INFO: Got endpoints: latency-svc-5jjp2 [998.178631ms] May 20 00:34:29.565: INFO: Created: latency-svc-gnsb2 May 20 00:34:29.582: INFO: Got endpoints: latency-svc-gnsb2 [953.961213ms] May 20 00:34:29.607: INFO: Created: latency-svc-mxbsb May 20 00:34:29.621: INFO: Got endpoints: latency-svc-mxbsb [862.365792ms] May 20 00:34:29.699: INFO: Created: latency-svc-w2l2h May 20 00:34:29.732: INFO: Created: latency-svc-slrvz May 20 00:34:29.732: INFO: Got endpoints: latency-svc-w2l2h [958.678939ms] May 20 00:34:29.756: INFO: Got endpoints: latency-svc-slrvz [920.936194ms] May 20 00:34:29.788: INFO: Created: latency-svc-p58r7 May 20 00:34:29.866: INFO: Got endpoints: latency-svc-p58r7 [941.08129ms] May 20 00:34:29.888: INFO: Created: latency-svc-4p8xj May 20 00:34:29.913: INFO: Got endpoints: latency-svc-4p8xj [923.858391ms] May 20 00:34:29.950: INFO: Created: latency-svc-9tmpx May 20 00:34:30.040: INFO: Got endpoints: latency-svc-9tmpx [975.870001ms] May 20 00:34:30.075: INFO: Created: latency-svc-dr44b May 20 00:34:30.109: INFO: Got endpoints: latency-svc-dr44b [977.879524ms] May 20 00:34:30.202: INFO: Created: latency-svc-hv9s2 May 20 00:34:30.224: INFO: Got endpoints: latency-svc-hv9s2 [1.06265111s] May 20 00:34:30.226: INFO: Created: latency-svc-6hkft May 20 00:34:30.248: INFO: Got endpoints: latency-svc-6hkft [1.002225463s] May 20 00:34:30.284: INFO: Created: latency-svc-fbrhk May 20 00:34:30.381: INFO: Got endpoints: latency-svc-fbrhk [1.081001671s] May 20 00:34:30.384: 
INFO: Created: latency-svc-9rvp4 May 20 00:34:30.403: INFO: Got endpoints: latency-svc-9rvp4 [1.003757469s] May 20 00:34:30.471: INFO: Created: latency-svc-kxfnl May 20 00:34:30.531: INFO: Got endpoints: latency-svc-kxfnl [1.092423576s] May 20 00:34:30.548: INFO: Created: latency-svc-mg2f4 May 20 00:34:30.563: INFO: Got endpoints: latency-svc-mg2f4 [1.081014463s] May 20 00:34:30.585: INFO: Created: latency-svc-wj8fz May 20 00:34:30.615: INFO: Got endpoints: latency-svc-wj8fz [1.056019516s] May 20 00:34:30.692: INFO: Created: latency-svc-mwznf May 20 00:34:30.717: INFO: Got endpoints: latency-svc-mwznf [1.134402253s] May 20 00:34:30.717: INFO: Created: latency-svc-9j79q May 20 00:34:30.741: INFO: Got endpoints: latency-svc-9j79q [1.119802302s] May 20 00:34:30.854: INFO: Created: latency-svc-fwjfw May 20 00:34:30.880: INFO: Created: latency-svc-zgfgk May 20 00:34:30.880: INFO: Got endpoints: latency-svc-fwjfw [1.148034703s] May 20 00:34:30.909: INFO: Got endpoints: latency-svc-zgfgk [1.152248169s] May 20 00:34:30.945: INFO: Created: latency-svc-t5k97 May 20 00:34:31.022: INFO: Got endpoints: latency-svc-t5k97 [1.15593372s] May 20 00:34:31.025: INFO: Created: latency-svc-tcgb7 May 20 00:34:31.033: INFO: Got endpoints: latency-svc-tcgb7 [1.11992406s] May 20 00:34:31.053: INFO: Created: latency-svc-ppwsp May 20 00:34:31.069: INFO: Got endpoints: latency-svc-ppwsp [1.029068266s] May 20 00:34:31.089: INFO: Created: latency-svc-6s2b4 May 20 00:34:31.099: INFO: Got endpoints: latency-svc-6s2b4 [989.662352ms] May 20 00:34:31.119: INFO: Created: latency-svc-z9j26 May 20 00:34:31.184: INFO: Got endpoints: latency-svc-z9j26 [959.223688ms] May 20 00:34:31.230: INFO: Created: latency-svc-x42xw May 20 00:34:31.264: INFO: Got endpoints: latency-svc-x42xw [1.015730642s] May 20 00:34:31.346: INFO: Created: latency-svc-mm976 May 20 00:34:31.371: INFO: Got endpoints: latency-svc-mm976 [989.128731ms] May 20 00:34:31.395: INFO: Created: latency-svc-qn96w May 20 00:34:31.407: INFO: Got 
endpoints: latency-svc-qn96w [1.004540012s] May 20 00:34:31.501: INFO: Created: latency-svc-xdvrt May 20 00:34:31.534: INFO: Got endpoints: latency-svc-xdvrt [1.003108553s] May 20 00:34:31.538: INFO: Created: latency-svc-5gz7z May 20 00:34:31.557: INFO: Got endpoints: latency-svc-5gz7z [994.419063ms] May 20 00:34:31.593: INFO: Created: latency-svc-bcgzb May 20 00:34:31.651: INFO: Got endpoints: latency-svc-bcgzb [1.036124381s] May 20 00:34:31.671: INFO: Created: latency-svc-fbvz9 May 20 00:34:31.684: INFO: Got endpoints: latency-svc-fbvz9 [967.171967ms] May 20 00:34:31.702: INFO: Created: latency-svc-6xnk4 May 20 00:34:31.739: INFO: Got endpoints: latency-svc-6xnk4 [997.748374ms] May 20 00:34:31.834: INFO: Created: latency-svc-dztb9 May 20 00:34:31.848: INFO: Got endpoints: latency-svc-dztb9 [967.064844ms] May 20 00:34:31.870: INFO: Created: latency-svc-jmlzl May 20 00:34:31.884: INFO: Got endpoints: latency-svc-jmlzl [975.02572ms] May 20 00:34:31.899: INFO: Created: latency-svc-ztv2n May 20 00:34:31.908: INFO: Got endpoints: latency-svc-ztv2n [885.990413ms] May 20 00:34:31.980: INFO: Created: latency-svc-ctzfl May 20 00:34:32.002: INFO: Got endpoints: latency-svc-ctzfl [968.721448ms] May 20 00:34:32.004: INFO: Created: latency-svc-pxpns May 20 00:34:32.025: INFO: Got endpoints: latency-svc-pxpns [955.896258ms] May 20 00:34:32.074: INFO: Created: latency-svc-dpf5z May 20 00:34:32.142: INFO: Got endpoints: latency-svc-dpf5z [1.042677477s] May 20 00:34:32.157: INFO: Created: latency-svc-x4vxc May 20 00:34:32.174: INFO: Got endpoints: latency-svc-x4vxc [990.26467ms] May 20 00:34:32.194: INFO: Created: latency-svc-p9q4t May 20 00:34:32.217: INFO: Got endpoints: latency-svc-p9q4t [953.120825ms] May 20 00:34:32.298: INFO: Created: latency-svc-qwh6j May 20 00:34:32.325: INFO: Created: latency-svc-mvzb4 May 20 00:34:32.326: INFO: Got endpoints: latency-svc-qwh6j [955.140904ms] May 20 00:34:32.350: INFO: Got endpoints: latency-svc-mvzb4 [942.298612ms] May 20 00:34:32.392: 
INFO: Created: latency-svc-xl625 May 20 00:34:32.465: INFO: Got endpoints: latency-svc-xl625 [930.661979ms] May 20 00:34:32.467: INFO: Created: latency-svc-m9hcq May 20 00:34:32.475: INFO: Got endpoints: latency-svc-m9hcq [917.216651ms] May 20 00:34:32.518: INFO: Created: latency-svc-zhr77 May 20 00:34:32.535: INFO: Got endpoints: latency-svc-zhr77 [884.458625ms] May 20 00:34:32.561: INFO: Created: latency-svc-rjzzf May 20 00:34:32.633: INFO: Got endpoints: latency-svc-rjzzf [949.451065ms] May 20 00:34:32.634: INFO: Created: latency-svc-hp5mp May 20 00:34:32.650: INFO: Got endpoints: latency-svc-hp5mp [910.788781ms] May 20 00:34:32.704: INFO: Created: latency-svc-dz48p May 20 00:34:32.720: INFO: Got endpoints: latency-svc-dz48p [871.984466ms] May 20 00:34:32.794: INFO: Created: latency-svc-kwh95 May 20 00:34:32.806: INFO: Got endpoints: latency-svc-kwh95 [922.422663ms] May 20 00:34:32.830: INFO: Created: latency-svc-tzd75 May 20 00:34:32.865: INFO: Got endpoints: latency-svc-tzd75 [957.173663ms] May 20 00:34:32.889: INFO: Created: latency-svc-6kmt7 May 20 00:34:32.932: INFO: Got endpoints: latency-svc-6kmt7 [929.776393ms] May 20 00:34:32.933: INFO: Created: latency-svc-pmlx7 May 20 00:34:32.946: INFO: Got endpoints: latency-svc-pmlx7 [920.314744ms] May 20 00:34:32.967: INFO: Created: latency-svc-4rwwt May 20 00:34:32.980: INFO: Got endpoints: latency-svc-4rwwt [838.372322ms] May 20 00:34:32.998: INFO: Created: latency-svc-jf92k May 20 00:34:33.022: INFO: Got endpoints: latency-svc-jf92k [847.723713ms] May 20 00:34:33.076: INFO: Created: latency-svc-d9kj7 May 20 00:34:33.083: INFO: Got endpoints: latency-svc-d9kj7 [865.458955ms] May 20 00:34:33.099: INFO: Created: latency-svc-ml49s May 20 00:34:33.114: INFO: Got endpoints: latency-svc-ml49s [787.923427ms] May 20 00:34:33.130: INFO: Created: latency-svc-dmtgb May 20 00:34:33.144: INFO: Got endpoints: latency-svc-dmtgb [793.839659ms] May 20 00:34:33.144: INFO: Latencies: [180.557725ms 192.744137ms 246.582339ms 
318.989272ms 356.126391ms 386.758466ms 484.220202ms 501.021176ms 531.648723ms 572.315129ms 640.418061ms 688.251705ms 721.81346ms 736.507451ms 768.888454ms 787.923427ms 790.647887ms 790.970256ms 791.496275ms 793.839659ms 799.001043ms 802.786852ms 804.341137ms 813.321366ms 814.875266ms 818.86395ms 818.943227ms 823.712535ms 829.203848ms 832.446274ms 834.519839ms 835.11547ms 836.128478ms 838.372322ms 840.449558ms 845.92015ms 847.151084ms 847.723713ms 852.797194ms 858.999201ms 862.039932ms 862.358051ms 862.365792ms 862.445796ms 865.458955ms 865.64938ms 867.654546ms 868.774904ms 871.984466ms 872.006761ms 873.523889ms 874.925353ms 875.62019ms 880.716408ms 881.826151ms 884.458625ms 884.949025ms 885.990413ms 888.551221ms 890.768128ms 895.164345ms 895.690583ms 899.907377ms 901.280338ms 907.930935ms 909.070347ms 910.788781ms 913.709251ms 915.774571ms 916.327011ms 917.216651ms 920.314744ms 920.936194ms 921.73289ms 922.422663ms 923.858391ms 929.776393ms 930.507695ms 930.641142ms 930.661979ms 933.371459ms 934.509007ms 939.893652ms 940.428857ms 941.08129ms 942.298612ms 943.216721ms 943.400083ms 943.483495ms 946.49762ms 946.758928ms 948.094845ms 948.437498ms 949.334973ms 949.451065ms 951.197188ms 951.27668ms 953.120825ms 953.961213ms 954.736418ms 955.140904ms 955.843107ms 955.896258ms 957.173663ms 958.678939ms 959.223688ms 960.674059ms 961.047422ms 961.595084ms 963.689716ms 963.955682ms 964.006523ms 964.125722ms 965.452572ms 965.683559ms 966.247428ms 966.55305ms 966.615017ms 967.064844ms 967.171967ms 968.721448ms 970.124581ms 970.15266ms 971.223851ms 972.470066ms 974.568443ms 975.02572ms 975.784263ms 975.870001ms 976.674122ms 977.879524ms 981.215191ms 982.696748ms 983.990785ms 984.093819ms 985.024798ms 989.128731ms 989.662352ms 990.26467ms 990.343947ms 994.419063ms 997.343866ms 997.748374ms 998.178631ms 998.501339ms 1.002076608s 1.002225463s 1.003108553s 1.003757469s 1.004540012s 1.008458991s 1.010922041s 1.015730642s 1.018335176s 1.019748101s 1.020341122s 1.02117717s 1.026088365s 
1.029068266s 1.030312124s 1.030400877s 1.03315212s 1.036124381s 1.036457099s 1.038971929s 1.042677477s 1.044550761s 1.047890691s 1.055200128s 1.056019516s 1.059487162s 1.061159632s 1.06265111s 1.071571393s 1.081001671s 1.081014463s 1.087043699s 1.08737353s 1.092423576s 1.119802302s 1.11992406s 1.134402253s 1.148034703s 1.152248169s 1.15593372s 1.238235145s 1.251280526s 1.253712544s 1.270166963s 1.273862926s 1.292970657s 1.296924185s 1.297321069s 1.297629539s 1.305332502s 1.319921774s 1.322694398s 1.348097803s 1.360846337s 1.363540364s]
May 20 00:34:33.144: INFO: 50 %ile: 955.140904ms
May 20 00:34:33.144: INFO: 90 %ile: 1.11992406s
May 20 00:34:33.144: INFO: 99 %ile: 1.360846337s
May 20 00:34:33.144: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:34:33.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-331" for this suite.
• [SLOW TEST:17.223 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":183,"skipped":3070,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:34:33.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0520 00:34:34.408032 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 20 00:34:34.408: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:34:34.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4926" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":184,"skipped":3086,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:34:34.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[BeforeEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 20 00:34:34.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1512'
May 20 00:34:38.520: INFO: stderr: ""
May 20 00:34:38.520: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528
May 20 00:34:38.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1512'
May 20 00:34:45.265: INFO: stderr: ""
May 20 00:34:45.265: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:34:45.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1512" for this suite.
• [SLOW TEST:10.932 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":185,"skipped":3109,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:34:45.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP:
creating service in namespace services-8560 May 20 00:34:49.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8560 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 20 00:34:49.810: INFO: stderr: "I0520 00:34:49.718193 2610 log.go:172] (0xc00003a4d0) (0xc0008a2280) Create stream\nI0520 00:34:49.718240 2610 log.go:172] (0xc00003a4d0) (0xc0008a2280) Stream added, broadcasting: 1\nI0520 00:34:49.719529 2610 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0520 00:34:49.719559 2610 log.go:172] (0xc00003a4d0) (0xc0008be820) Create stream\nI0520 00:34:49.719575 2610 log.go:172] (0xc00003a4d0) (0xc0008be820) Stream added, broadcasting: 3\nI0520 00:34:49.720318 2610 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0520 00:34:49.720354 2610 log.go:172] (0xc00003a4d0) (0xc000896460) Create stream\nI0520 00:34:49.720361 2610 log.go:172] (0xc00003a4d0) (0xc000896460) Stream added, broadcasting: 5\nI0520 00:34:49.720971 2610 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0520 00:34:49.801349 2610 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0520 00:34:49.801374 2610 log.go:172] (0xc000896460) (5) Data frame handling\nI0520 00:34:49.801386 2610 log.go:172] (0xc000896460) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0520 00:34:49.804163 2610 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0520 00:34:49.804189 2610 log.go:172] (0xc0008be820) (3) Data frame handling\nI0520 00:34:49.804207 2610 log.go:172] (0xc0008be820) (3) Data frame sent\nI0520 00:34:49.804756 2610 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0520 00:34:49.804766 2610 log.go:172] (0xc0008be820) (3) Data frame handling\nI0520 00:34:49.804852 2610 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0520 00:34:49.804874 2610 log.go:172] (0xc000896460) (5) Data frame 
handling\nI0520 00:34:49.806400 2610 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0520 00:34:49.806453 2610 log.go:172] (0xc0008a2280) (1) Data frame handling\nI0520 00:34:49.806476 2610 log.go:172] (0xc0008a2280) (1) Data frame sent\nI0520 00:34:49.806490 2610 log.go:172] (0xc00003a4d0) (0xc0008a2280) Stream removed, broadcasting: 1\nI0520 00:34:49.806511 2610 log.go:172] (0xc00003a4d0) Go away received\nI0520 00:34:49.806793 2610 log.go:172] (0xc00003a4d0) (0xc0008a2280) Stream removed, broadcasting: 1\nI0520 00:34:49.806806 2610 log.go:172] (0xc00003a4d0) (0xc0008be820) Stream removed, broadcasting: 3\nI0520 00:34:49.806813 2610 log.go:172] (0xc00003a4d0) (0xc000896460) Stream removed, broadcasting: 5\n" May 20 00:34:49.810: INFO: stdout: "iptables" May 20 00:34:49.810: INFO: proxyMode: iptables May 20 00:34:49.867: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 20 00:34:49.911: INFO: Pod kube-proxy-mode-detector still exists May 20 00:34:51.911: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 20 00:34:51.914: INFO: Pod kube-proxy-mode-detector still exists May 20 00:34:53.911: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 20 00:34:53.944: INFO: Pod kube-proxy-mode-detector still exists May 20 00:34:55.911: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 20 00:34:55.927: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-8560 STEP: creating replication controller affinity-nodeport-timeout in namespace services-8560 I0520 00:34:56.104816 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-8560, replica count: 3 I0520 00:34:59.155187 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 00:35:02.155396 7 runners.go:190] affinity-nodeport-timeout Pods: 
3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 00:35:05.155619 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 00:35:05.214: INFO: Creating new exec pod May 20 00:35:10.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8560 execpod-affinityc7nvm -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 20 00:35:10.516: INFO: stderr: "I0520 00:35:10.414320 2625 log.go:172] (0xc000b733f0) (0xc000703720) Create stream\nI0520 00:35:10.414407 2625 log.go:172] (0xc000b733f0) (0xc000703720) Stream added, broadcasting: 1\nI0520 00:35:10.418174 2625 log.go:172] (0xc000b733f0) Reply frame received for 1\nI0520 00:35:10.418230 2625 log.go:172] (0xc000b733f0) (0xc000665180) Create stream\nI0520 00:35:10.418255 2625 log.go:172] (0xc000b733f0) (0xc000665180) Stream added, broadcasting: 3\nI0520 00:35:10.419698 2625 log.go:172] (0xc000b733f0) Reply frame received for 3\nI0520 00:35:10.419736 2625 log.go:172] (0xc000b733f0) (0xc000635ea0) Create stream\nI0520 00:35:10.419749 2625 log.go:172] (0xc000b733f0) (0xc000635ea0) Stream added, broadcasting: 5\nI0520 00:35:10.421056 2625 log.go:172] (0xc000b733f0) Reply frame received for 5\nI0520 00:35:10.509711 2625 log.go:172] (0xc000b733f0) Data frame received for 5\nI0520 00:35:10.509772 2625 log.go:172] (0xc000635ea0) (5) Data frame handling\nI0520 00:35:10.509817 2625 log.go:172] (0xc000635ea0) (5) Data frame sent\nI0520 00:35:10.509841 2625 log.go:172] (0xc000b733f0) Data frame received for 5\nI0520 00:35:10.509859 2625 log.go:172] (0xc000635ea0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0520 00:35:10.509901 2625 log.go:172] (0xc000635ea0) (5) Data 
frame sent\nI0520 00:35:10.509919 2625 log.go:172] (0xc000b733f0) Data frame received for 5\nI0520 00:35:10.509936 2625 log.go:172] (0xc000635ea0) (5) Data frame handling\nI0520 00:35:10.509988 2625 log.go:172] (0xc000b733f0) Data frame received for 3\nI0520 00:35:10.510027 2625 log.go:172] (0xc000665180) (3) Data frame handling\nI0520 00:35:10.511978 2625 log.go:172] (0xc000b733f0) Data frame received for 1\nI0520 00:35:10.512057 2625 log.go:172] (0xc000703720) (1) Data frame handling\nI0520 00:35:10.512089 2625 log.go:172] (0xc000703720) (1) Data frame sent\nI0520 00:35:10.512108 2625 log.go:172] (0xc000b733f0) (0xc000703720) Stream removed, broadcasting: 1\nI0520 00:35:10.512131 2625 log.go:172] (0xc000b733f0) Go away received\nI0520 00:35:10.512551 2625 log.go:172] (0xc000b733f0) (0xc000703720) Stream removed, broadcasting: 1\nI0520 00:35:10.512568 2625 log.go:172] (0xc000b733f0) (0xc000665180) Stream removed, broadcasting: 3\nI0520 00:35:10.512580 2625 log.go:172] (0xc000b733f0) (0xc000635ea0) Stream removed, broadcasting: 5\n" May 20 00:35:10.516: INFO: stdout: "" May 20 00:35:10.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8560 execpod-affinityc7nvm -- /bin/sh -x -c nc -zv -t -w 2 10.98.5.173 80' May 20 00:35:10.717: INFO: stderr: "I0520 00:35:10.644493 2645 log.go:172] (0xc0009853f0) (0xc000709ea0) Create stream\nI0520 00:35:10.644551 2645 log.go:172] (0xc0009853f0) (0xc000709ea0) Stream added, broadcasting: 1\nI0520 00:35:10.646893 2645 log.go:172] (0xc0009853f0) Reply frame received for 1\nI0520 00:35:10.646927 2645 log.go:172] (0xc0009853f0) (0xc000547680) Create stream\nI0520 00:35:10.646937 2645 log.go:172] (0xc0009853f0) (0xc000547680) Stream added, broadcasting: 3\nI0520 00:35:10.647922 2645 log.go:172] (0xc0009853f0) Reply frame received for 3\nI0520 00:35:10.647974 2645 log.go:172] (0xc0009853f0) (0xc000547720) Create stream\nI0520 00:35:10.647998 2645 
log.go:172] (0xc0009853f0) (0xc000547720) Stream added, broadcasting: 5\nI0520 00:35:10.648922 2645 log.go:172] (0xc0009853f0) Reply frame received for 5\nI0520 00:35:10.710582 2645 log.go:172] (0xc0009853f0) Data frame received for 5\nI0520 00:35:10.710616 2645 log.go:172] (0xc000547720) (5) Data frame handling\nI0520 00:35:10.710638 2645 log.go:172] (0xc000547720) (5) Data frame sent\nI0520 00:35:10.710646 2645 log.go:172] (0xc0009853f0) Data frame received for 5\nI0520 00:35:10.710650 2645 log.go:172] (0xc000547720) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.5.173 80\nConnection to 10.98.5.173 80 port [tcp/http] succeeded!\nI0520 00:35:10.710669 2645 log.go:172] (0xc0009853f0) Data frame received for 3\nI0520 00:35:10.710677 2645 log.go:172] (0xc000547680) (3) Data frame handling\nI0520 00:35:10.712024 2645 log.go:172] (0xc0009853f0) Data frame received for 1\nI0520 00:35:10.712049 2645 log.go:172] (0xc000709ea0) (1) Data frame handling\nI0520 00:35:10.712073 2645 log.go:172] (0xc000709ea0) (1) Data frame sent\nI0520 00:35:10.712150 2645 log.go:172] (0xc0009853f0) (0xc000709ea0) Stream removed, broadcasting: 1\nI0520 00:35:10.712252 2645 log.go:172] (0xc0009853f0) Go away received\nI0520 00:35:10.712518 2645 log.go:172] (0xc0009853f0) (0xc000709ea0) Stream removed, broadcasting: 1\nI0520 00:35:10.712537 2645 log.go:172] (0xc0009853f0) (0xc000547680) Stream removed, broadcasting: 3\nI0520 00:35:10.712543 2645 log.go:172] (0xc0009853f0) (0xc000547720) Stream removed, broadcasting: 5\n" May 20 00:35:10.717: INFO: stdout: "" May 20 00:35:10.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8560 execpod-affinityc7nvm -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31270' May 20 00:35:10.927: INFO: stderr: "I0520 00:35:10.854689 2665 log.go:172] (0xc000aa18c0) (0xc0005f8c80) Create stream\nI0520 00:35:10.854749 2665 log.go:172] (0xc000aa18c0) (0xc0005f8c80) Stream added, 
broadcasting: 1\nI0520 00:35:10.865068 2665 log.go:172] (0xc000aa18c0) Reply frame received for 1\nI0520 00:35:10.865401 2665 log.go:172] (0xc000aa18c0) (0xc0005b8500) Create stream\nI0520 00:35:10.865516 2665 log.go:172] (0xc000aa18c0) (0xc0005b8500) Stream added, broadcasting: 3\nI0520 00:35:10.870332 2665 log.go:172] (0xc000aa18c0) Reply frame received for 3\nI0520 00:35:10.870381 2665 log.go:172] (0xc000aa18c0) (0xc0005401e0) Create stream\nI0520 00:35:10.870391 2665 log.go:172] (0xc000aa18c0) (0xc0005401e0) Stream added, broadcasting: 5\nI0520 00:35:10.871424 2665 log.go:172] (0xc000aa18c0) Reply frame received for 5\nI0520 00:35:10.920692 2665 log.go:172] (0xc000aa18c0) Data frame received for 3\nI0520 00:35:10.920729 2665 log.go:172] (0xc0005b8500) (3) Data frame handling\nI0520 00:35:10.920752 2665 log.go:172] (0xc000aa18c0) Data frame received for 5\nI0520 00:35:10.920764 2665 log.go:172] (0xc0005401e0) (5) Data frame handling\nI0520 00:35:10.920776 2665 log.go:172] (0xc0005401e0) (5) Data frame sent\nI0520 00:35:10.920804 2665 log.go:172] (0xc000aa18c0) Data frame received for 5\nI0520 00:35:10.920820 2665 log.go:172] (0xc0005401e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31270\nConnection to 172.17.0.13 31270 port [tcp/31270] succeeded!\nI0520 00:35:10.922305 2665 log.go:172] (0xc000aa18c0) Data frame received for 1\nI0520 00:35:10.922334 2665 log.go:172] (0xc0005f8c80) (1) Data frame handling\nI0520 00:35:10.922361 2665 log.go:172] (0xc0005f8c80) (1) Data frame sent\nI0520 00:35:10.922382 2665 log.go:172] (0xc000aa18c0) (0xc0005f8c80) Stream removed, broadcasting: 1\nI0520 00:35:10.922453 2665 log.go:172] (0xc000aa18c0) Go away received\nI0520 00:35:10.922802 2665 log.go:172] (0xc000aa18c0) (0xc0005f8c80) Stream removed, broadcasting: 1\nI0520 00:35:10.922839 2665 log.go:172] (0xc000aa18c0) (0xc0005b8500) Stream removed, broadcasting: 3\nI0520 00:35:10.922863 2665 log.go:172] (0xc000aa18c0) (0xc0005401e0) Stream removed, broadcasting: 5\n" 
May 20 00:35:10.928: INFO: stdout: "" May 20 00:35:10.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8560 execpod-affinityc7nvm -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31270' May 20 00:35:11.128: INFO: stderr: "I0520 00:35:11.063046 2686 log.go:172] (0xc00003b130) (0xc0002575e0) Create stream\nI0520 00:35:11.063097 2686 log.go:172] (0xc00003b130) (0xc0002575e0) Stream added, broadcasting: 1\nI0520 00:35:11.065781 2686 log.go:172] (0xc00003b130) Reply frame received for 1\nI0520 00:35:11.065839 2686 log.go:172] (0xc00003b130) (0xc000384640) Create stream\nI0520 00:35:11.065860 2686 log.go:172] (0xc00003b130) (0xc000384640) Stream added, broadcasting: 3\nI0520 00:35:11.066860 2686 log.go:172] (0xc00003b130) Reply frame received for 3\nI0520 00:35:11.066900 2686 log.go:172] (0xc00003b130) (0xc000257680) Create stream\nI0520 00:35:11.066912 2686 log.go:172] (0xc00003b130) (0xc000257680) Stream added, broadcasting: 5\nI0520 00:35:11.067897 2686 log.go:172] (0xc00003b130) Reply frame received for 5\nI0520 00:35:11.122249 2686 log.go:172] (0xc00003b130) Data frame received for 3\nI0520 00:35:11.122271 2686 log.go:172] (0xc000384640) (3) Data frame handling\nI0520 00:35:11.122293 2686 log.go:172] (0xc00003b130) Data frame received for 5\nI0520 00:35:11.122303 2686 log.go:172] (0xc000257680) (5) Data frame handling\nI0520 00:35:11.122310 2686 log.go:172] (0xc000257680) (5) Data frame sent\nI0520 00:35:11.122317 2686 log.go:172] (0xc00003b130) Data frame received for 5\nI0520 00:35:11.122321 2686 log.go:172] (0xc000257680) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31270\nConnection to 172.17.0.12 31270 port [tcp/31270] succeeded!\nI0520 00:35:11.123724 2686 log.go:172] (0xc00003b130) Data frame received for 1\nI0520 00:35:11.123737 2686 log.go:172] (0xc0002575e0) (1) Data frame handling\nI0520 00:35:11.123744 2686 log.go:172] (0xc0002575e0) (1) Data frame sent\nI0520 
00:35:11.123751 2686 log.go:172] (0xc00003b130) (0xc0002575e0) Stream removed, broadcasting: 1\nI0520 00:35:11.123760 2686 log.go:172] (0xc00003b130) Go away received\nI0520 00:35:11.124020 2686 log.go:172] (0xc00003b130) (0xc0002575e0) Stream removed, broadcasting: 1\nI0520 00:35:11.124032 2686 log.go:172] (0xc00003b130) (0xc000384640) Stream removed, broadcasting: 3\nI0520 00:35:11.124037 2686 log.go:172] (0xc00003b130) (0xc000257680) Stream removed, broadcasting: 5\n" May 20 00:35:11.128: INFO: stdout: "" May 20 00:35:11.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8560 execpod-affinityc7nvm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31270/ ; done' May 20 00:35:11.424: INFO: stderr: "I0520 00:35:11.257046 2709 log.go:172] (0xc00003a210) (0xc000516d20) Create stream\nI0520 00:35:11.257359 2709 log.go:172] (0xc00003a210) (0xc000516d20) Stream added, broadcasting: 1\nI0520 00:35:11.264338 2709 log.go:172] (0xc00003a210) Reply frame received for 1\nI0520 00:35:11.264401 2709 log.go:172] (0xc00003a210) (0xc00067c1e0) Create stream\nI0520 00:35:11.264436 2709 log.go:172] (0xc00003a210) (0xc00067c1e0) Stream added, broadcasting: 3\nI0520 00:35:11.269013 2709 log.go:172] (0xc00003a210) Reply frame received for 3\nI0520 00:35:11.269040 2709 log.go:172] (0xc00003a210) (0xc0005172c0) Create stream\nI0520 00:35:11.269046 2709 log.go:172] (0xc00003a210) (0xc0005172c0) Stream added, broadcasting: 5\nI0520 00:35:11.270123 2709 log.go:172] (0xc00003a210) Reply frame received for 5\nI0520 00:35:11.319429 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.319483 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.319503 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.319534 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.319549 2709 log.go:172] 
(0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.319566 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.326205 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.326221 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.326232 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.326653 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.326667 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.326675 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.326703 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.326713 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.326721 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\nI0520 00:35:11.326730 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.326736 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.326747 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\nI0520 00:35:11.332408 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.332431 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.332446 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.332814 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.332885 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.332901 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.332915 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.332926 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.332958 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.337533 
2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.337547 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.337559 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.337975 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.337998 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.338011 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.338024 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.338033 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.338047 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.343048 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.343066 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.343077 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.343662 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.343684 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.343693 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.343713 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.343729 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.343747 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.348746 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.348759 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.348778 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.349469 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.349491 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.349510 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.349531 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.349544 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.349566 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.353314 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.353339 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.353362 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.354150 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.354178 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.354188 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.354199 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.354205 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.354215 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.358939 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.358958 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.358982 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.359379 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.359400 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.359426 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.359446 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.359461 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.359467 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.365557 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.365577 2709 log.go:172] (0xc00067c1e0) (3) Data frame 
handling\nI0520 00:35:11.365599 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.366224 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.366267 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.366290 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.366315 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.366332 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.366358 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\nI0520 00:35:11.366375 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.366392 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.366420 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\nI0520 00:35:11.372651 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.372681 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.372698 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.372711 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.372723 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.372747 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.372768 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.372785 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\nI0520 00:35:11.372802 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.372815 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.372829 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.372860 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\nI0520 00:35:11.378131 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.378163 2709 log.go:172] 
(0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.378195 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.378527 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.378546 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.378557 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.378584 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.378616 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.378640 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.385103 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.385334 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.385361 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.385899 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.385915 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.385928 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.385953 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.385969 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.385987 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.391454 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.391468 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.391477 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.392569 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.392699 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.392838 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.392978 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.393087 
2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.393396 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.400311 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.400327 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.400337 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.401333 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.401355 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.401374 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.401397 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.401418 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.401433 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.406015 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.406038 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.406054 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.406674 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.406690 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.406697 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.406707 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.406712 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.406717 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.411453 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.411468 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.411485 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 
00:35:11.411937 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.411966 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.411985 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.412008 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.412019 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.412027 2709 log.go:172] (0xc0005172c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.416672 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.416696 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.416715 2709 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0520 00:35:11.416742 2709 log.go:172] (0xc00003a210) Data frame received for 5\nI0520 00:35:11.416760 2709 log.go:172] (0xc0005172c0) (5) Data frame handling\nI0520 00:35:11.417614 2709 log.go:172] (0xc00003a210) Data frame received for 3\nI0520 00:35:11.417633 2709 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0520 00:35:11.418984 2709 log.go:172] (0xc00003a210) Data frame received for 1\nI0520 00:35:11.419018 2709 log.go:172] (0xc000516d20) (1) Data frame handling\nI0520 00:35:11.419039 2709 log.go:172] (0xc000516d20) (1) Data frame sent\nI0520 00:35:11.419408 2709 log.go:172] (0xc00003a210) (0xc000516d20) Stream removed, broadcasting: 1\nI0520 00:35:11.419451 2709 log.go:172] (0xc00003a210) Go away received\nI0520 00:35:11.419728 2709 log.go:172] (0xc00003a210) (0xc000516d20) Stream removed, broadcasting: 1\nI0520 00:35:11.419746 2709 log.go:172] (0xc00003a210) (0xc00067c1e0) Stream removed, broadcasting: 3\nI0520 00:35:11.419753 2709 log.go:172] (0xc00003a210) (0xc0005172c0) Stream removed, broadcasting: 5\n" May 20 00:35:11.425: INFO: stdout: 
"\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5\naffinity-nodeport-timeout-26fc5" May 20 00:35:11.425: INFO: Received response from host: May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 20 00:35:11.425: INFO: Received response from host: affinity-nodeport-timeout-26fc5 May 
20 00:35:11.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8560 execpod-affinityc7nvm -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31270/' May 20 00:35:11.636: INFO: stderr: "I0520 00:35:11.560747 2728 log.go:172] (0xc000a05a20) (0xc000800f00) Create stream\nI0520 00:35:11.560801 2728 log.go:172] (0xc000a05a20) (0xc000800f00) Stream added, broadcasting: 1\nI0520 00:35:11.562782 2728 log.go:172] (0xc000a05a20) Reply frame received for 1\nI0520 00:35:11.562834 2728 log.go:172] (0xc000a05a20) (0xc00080c5a0) Create stream\nI0520 00:35:11.562846 2728 log.go:172] (0xc000a05a20) (0xc00080c5a0) Stream added, broadcasting: 3\nI0520 00:35:11.563624 2728 log.go:172] (0xc000a05a20) Reply frame received for 3\nI0520 00:35:11.563658 2728 log.go:172] (0xc000a05a20) (0xc0008014a0) Create stream\nI0520 00:35:11.563670 2728 log.go:172] (0xc000a05a20) (0xc0008014a0) Stream added, broadcasting: 5\nI0520 00:35:11.564275 2728 log.go:172] (0xc000a05a20) Reply frame received for 5\nI0520 00:35:11.625539 2728 log.go:172] (0xc000a05a20) Data frame received for 5\nI0520 00:35:11.625561 2728 log.go:172] (0xc0008014a0) (5) Data frame handling\nI0520 00:35:11.625573 2728 log.go:172] (0xc0008014a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:11.627628 2728 log.go:172] (0xc000a05a20) Data frame received for 3\nI0520 00:35:11.627648 2728 log.go:172] (0xc00080c5a0) (3) Data frame handling\nI0520 00:35:11.627662 2728 log.go:172] (0xc00080c5a0) (3) Data frame sent\nI0520 00:35:11.628295 2728 log.go:172] (0xc000a05a20) Data frame received for 3\nI0520 00:35:11.628329 2728 log.go:172] (0xc00080c5a0) (3) Data frame handling\nI0520 00:35:11.628344 2728 log.go:172] (0xc000a05a20) Data frame received for 5\nI0520 00:35:11.628354 2728 log.go:172] (0xc0008014a0) (5) Data frame handling\nI0520 00:35:11.629626 2728 log.go:172] (0xc000a05a20) Data frame 
received for 1\nI0520 00:35:11.629655 2728 log.go:172] (0xc000800f00) (1) Data frame handling\nI0520 00:35:11.629667 2728 log.go:172] (0xc000800f00) (1) Data frame sent\nI0520 00:35:11.629677 2728 log.go:172] (0xc000a05a20) (0xc000800f00) Stream removed, broadcasting: 1\nI0520 00:35:11.629687 2728 log.go:172] (0xc000a05a20) Go away received\nI0520 00:35:11.630108 2728 log.go:172] (0xc000a05a20) (0xc000800f00) Stream removed, broadcasting: 1\nI0520 00:35:11.630143 2728 log.go:172] (0xc000a05a20) (0xc00080c5a0) Stream removed, broadcasting: 3\nI0520 00:35:11.630156 2728 log.go:172] (0xc000a05a20) (0xc0008014a0) Stream removed, broadcasting: 5\n" May 20 00:35:11.636: INFO: stdout: "affinity-nodeport-timeout-26fc5" May 20 00:35:26.636: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8560 execpod-affinityc7nvm -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31270/' May 20 00:35:26.868: INFO: stderr: "I0520 00:35:26.774286 2746 log.go:172] (0xc00098e8f0) (0xc000a5a320) Create stream\nI0520 00:35:26.774340 2746 log.go:172] (0xc00098e8f0) (0xc000a5a320) Stream added, broadcasting: 1\nI0520 00:35:26.777723 2746 log.go:172] (0xc00098e8f0) Reply frame received for 1\nI0520 00:35:26.777750 2746 log.go:172] (0xc00098e8f0) (0xc0006f2fa0) Create stream\nI0520 00:35:26.777763 2746 log.go:172] (0xc00098e8f0) (0xc0006f2fa0) Stream added, broadcasting: 3\nI0520 00:35:26.778401 2746 log.go:172] (0xc00098e8f0) Reply frame received for 3\nI0520 00:35:26.778430 2746 log.go:172] (0xc00098e8f0) (0xc0006dcb40) Create stream\nI0520 00:35:26.778446 2746 log.go:172] (0xc00098e8f0) (0xc0006dcb40) Stream added, broadcasting: 5\nI0520 00:35:26.778959 2746 log.go:172] (0xc00098e8f0) Reply frame received for 5\nI0520 00:35:26.855322 2746 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0520 00:35:26.855354 2746 log.go:172] (0xc0006dcb40) (5) Data frame handling\nI0520 00:35:26.855379 2746 
log.go:172] (0xc0006dcb40) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31270/\nI0520 00:35:26.858583 2746 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0520 00:35:26.858602 2746 log.go:172] (0xc0006f2fa0) (3) Data frame handling\nI0520 00:35:26.858619 2746 log.go:172] (0xc0006f2fa0) (3) Data frame sent\nI0520 00:35:26.859039 2746 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0520 00:35:26.859066 2746 log.go:172] (0xc0006f2fa0) (3) Data frame handling\nI0520 00:35:26.859101 2746 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0520 00:35:26.859123 2746 log.go:172] (0xc0006dcb40) (5) Data frame handling\nI0520 00:35:26.861041 2746 log.go:172] (0xc00098e8f0) Data frame received for 1\nI0520 00:35:26.861060 2746 log.go:172] (0xc000a5a320) (1) Data frame handling\nI0520 00:35:26.861074 2746 log.go:172] (0xc000a5a320) (1) Data frame sent\nI0520 00:35:26.861087 2746 log.go:172] (0xc00098e8f0) (0xc000a5a320) Stream removed, broadcasting: 1\nI0520 00:35:26.861564 2746 log.go:172] (0xc00098e8f0) Go away received\nI0520 00:35:26.861608 2746 log.go:172] (0xc00098e8f0) (0xc000a5a320) Stream removed, broadcasting: 1\nI0520 00:35:26.861635 2746 log.go:172] (0xc00098e8f0) (0xc0006f2fa0) Stream removed, broadcasting: 3\nI0520 00:35:26.861664 2746 log.go:172] (0xc00098e8f0) (0xc0006dcb40) Stream removed, broadcasting: 5\n" May 20 00:35:26.868: INFO: stdout: "affinity-nodeport-timeout-mrxcg" May 20 00:35:26.868: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-8560, will wait for the garbage collector to delete the pods May 20 00:35:26.975: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 7.340808ms May 20 00:35:27.375: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 400.242798ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 
00:35:35.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8560" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:50.106 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":186,"skipped":3124,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath
runs ReplicaSets to verify preemption running path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:35:35.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80
May 20 00:35:35.550: INFO: Waiting up to 1m0s for all nodes to be ready
May 20 00:36:35.575: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:36:35.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. May 20 00:36:39.833: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:37:02.192: INFO: pods created so far: [1 1 1] May 20 00:37:02.192: INFO: length of pods created so far: 3 May 20 00:37:16.202: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:37:23.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-8303" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:37:23.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7906" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:107.916 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":187,"skipped":3131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:37:23.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7978 STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 00:37:23.440: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 00:37:23.510: INFO: The status of Pod netserver-0 is 
Pending, waiting for it to be Running (with Ready = true) May 20 00:37:25.743: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 00:37:27.514: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:37:29.521: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:37:31.513: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:37:33.514: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:37:35.514: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:37:37.514: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:37:39.514: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:37:41.514: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:37:43.514: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:37:45.514: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 00:37:47.514: INFO: The status of Pod netserver-0 is Running (Ready = true) May 20 00:37:47.519: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 20 00:37:51.552: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.200:8080/dial?request=hostname&protocol=http&host=10.244.1.199&port=8080&tries=1'] Namespace:pod-network-test-7978 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 00:37:51.552: INFO: >>> kubeConfig: /root/.kube/config I0520 00:37:51.590505 7 log.go:172] (0xc002fe8370) (0xc0011a8be0) Create stream I0520 00:37:51.590539 7 log.go:172] (0xc002fe8370) (0xc0011a8be0) Stream added, broadcasting: 1 I0520 00:37:51.592829 7 log.go:172] (0xc002fe8370) Reply frame received for 1 I0520 00:37:51.592879 7 log.go:172] (0xc002fe8370) (0xc00262a000) Create stream I0520 00:37:51.592894 7 log.go:172] (0xc002fe8370) 
(0xc00262a000) Stream added, broadcasting: 3 I0520 00:37:51.593945 7 log.go:172] (0xc002fe8370) Reply frame received for 3 I0520 00:37:51.593982 7 log.go:172] (0xc002fe8370) (0xc0011a8d20) Create stream I0520 00:37:51.593995 7 log.go:172] (0xc002fe8370) (0xc0011a8d20) Stream added, broadcasting: 5 I0520 00:37:51.595017 7 log.go:172] (0xc002fe8370) Reply frame received for 5 I0520 00:37:51.711507 7 log.go:172] (0xc002fe8370) Data frame received for 3 I0520 00:37:51.711534 7 log.go:172] (0xc00262a000) (3) Data frame handling I0520 00:37:51.711548 7 log.go:172] (0xc00262a000) (3) Data frame sent I0520 00:37:51.712030 7 log.go:172] (0xc002fe8370) Data frame received for 3 I0520 00:37:51.712045 7 log.go:172] (0xc00262a000) (3) Data frame handling I0520 00:37:51.712183 7 log.go:172] (0xc002fe8370) Data frame received for 5 I0520 00:37:51.712199 7 log.go:172] (0xc0011a8d20) (5) Data frame handling I0520 00:37:51.714207 7 log.go:172] (0xc002fe8370) Data frame received for 1 I0520 00:37:51.714222 7 log.go:172] (0xc0011a8be0) (1) Data frame handling I0520 00:37:51.714231 7 log.go:172] (0xc0011a8be0) (1) Data frame sent I0520 00:37:51.714246 7 log.go:172] (0xc002fe8370) (0xc0011a8be0) Stream removed, broadcasting: 1 I0520 00:37:51.714259 7 log.go:172] (0xc002fe8370) Go away received I0520 00:37:51.714402 7 log.go:172] (0xc002fe8370) (0xc0011a8be0) Stream removed, broadcasting: 1 I0520 00:37:51.714425 7 log.go:172] (0xc002fe8370) (0xc00262a000) Stream removed, broadcasting: 3 I0520 00:37:51.714438 7 log.go:172] (0xc002fe8370) (0xc0011a8d20) Stream removed, broadcasting: 5 May 20 00:37:51.714: INFO: Waiting for responses: map[] May 20 00:37:51.717: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.200:8080/dial?request=hostname&protocol=http&host=10.244.2.208&port=8080&tries=1'] Namespace:pod-network-test-7978 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 00:37:51.717: 
INFO: >>> kubeConfig: /root/.kube/config I0520 00:37:51.744446 7 log.go:172] (0xc000eba420) (0xc0010fb400) Create stream I0520 00:37:51.744477 7 log.go:172] (0xc000eba420) (0xc0010fb400) Stream added, broadcasting: 1 I0520 00:37:51.746434 7 log.go:172] (0xc000eba420) Reply frame received for 1 I0520 00:37:51.746481 7 log.go:172] (0xc000eba420) (0xc0010fb720) Create stream I0520 00:37:51.746496 7 log.go:172] (0xc000eba420) (0xc0010fb720) Stream added, broadcasting: 3 I0520 00:37:51.747595 7 log.go:172] (0xc000eba420) Reply frame received for 3 I0520 00:37:51.747636 7 log.go:172] (0xc000eba420) (0xc0011a8dc0) Create stream I0520 00:37:51.747650 7 log.go:172] (0xc000eba420) (0xc0011a8dc0) Stream added, broadcasting: 5 I0520 00:37:51.748572 7 log.go:172] (0xc000eba420) Reply frame received for 5 I0520 00:37:51.827032 7 log.go:172] (0xc000eba420) Data frame received for 3 I0520 00:37:51.827064 7 log.go:172] (0xc0010fb720) (3) Data frame handling I0520 00:37:51.827082 7 log.go:172] (0xc0010fb720) (3) Data frame sent I0520 00:37:51.827275 7 log.go:172] (0xc000eba420) Data frame received for 5 I0520 00:37:51.827302 7 log.go:172] (0xc0011a8dc0) (5) Data frame handling I0520 00:37:51.827341 7 log.go:172] (0xc000eba420) Data frame received for 3 I0520 00:37:51.827361 7 log.go:172] (0xc0010fb720) (3) Data frame handling I0520 00:37:51.828567 7 log.go:172] (0xc000eba420) Data frame received for 1 I0520 00:37:51.828595 7 log.go:172] (0xc0010fb400) (1) Data frame handling I0520 00:37:51.828605 7 log.go:172] (0xc0010fb400) (1) Data frame sent I0520 00:37:51.828625 7 log.go:172] (0xc000eba420) (0xc0010fb400) Stream removed, broadcasting: 1 I0520 00:37:51.828638 7 log.go:172] (0xc000eba420) Go away received I0520 00:37:51.828820 7 log.go:172] (0xc000eba420) (0xc0010fb400) Stream removed, broadcasting: 1 I0520 00:37:51.828843 7 log.go:172] (0xc000eba420) (0xc0010fb720) Stream removed, broadcasting: 3 I0520 00:37:51.828858 7 log.go:172] (0xc000eba420) (0xc0011a8dc0) Stream removed, 
broadcasting: 5 May 20 00:37:51.828: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:37:51.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7978" for this suite. • [SLOW TEST:28.468 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":188,"skipped":3162,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:37:51.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the 
webhook pod STEP: Wait for the deployment to be ready May 20 00:37:52.431: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 00:37:54.443: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725531872, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725531872, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725531872, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725531872, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 00:37:57.490: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:37:59.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-6126" for this suite. STEP: Destroying namespace "webhook-6126-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.737 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":189,"skipped":3173,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:37:59.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 20 00:37:59.719: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:37:59.729: INFO: Number of nodes with available pods: 0 May 20 00:37:59.729: INFO: Node latest-worker is running more than one daemon pod May 20 00:38:00.734: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:38:00.736: INFO: Number of nodes with available pods: 0 May 20 00:38:00.736: INFO: Node latest-worker is running more than one daemon pod May 20 00:38:01.737: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:38:01.740: INFO: Number of nodes with available pods: 0 May 20 00:38:01.740: INFO: Node latest-worker is running more than one daemon pod May 20 00:38:02.734: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:38:02.738: INFO: Number of nodes with available pods: 0 May 20 00:38:02.738: INFO: Node latest-worker is running more than one daemon pod May 20 00:38:03.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:38:03.753: INFO: Number of nodes with available pods: 1 May 20 00:38:03.753: INFO: Node latest-worker2 is running more than one daemon pod May 20 00:38:04.744: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:38:04.784: INFO: Number of nodes with available pods: 2 May 20 00:38:04.784: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 20 00:38:04.822: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 00:38:04.837: INFO: Number of nodes with available pods: 2 May 20 00:38:04.838: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6521, will wait for the garbage collector to delete the pods May 20 00:38:06.094: INFO: Deleting DaemonSet.extensions daemon-set took: 6.044715ms May 20 00:38:06.694: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.265218ms May 20 00:38:15.297: INFO: Number of nodes with available pods: 0 May 20 00:38:15.297: INFO: Number of running nodes: 0, number of available pods: 0 May 20 00:38:15.300: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6521/daemonsets","resourceVersion":"6094045"},"items":null} May 20 00:38:15.302: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6521/pods","resourceVersion":"6094045"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:38:15.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6521" for this suite. 
• [SLOW TEST:15.746 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":190,"skipped":3181,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:38:15.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 00:38:16.084: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 00:38:18.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725531896, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725531896, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725531896, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725531895, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 00:38:21.700: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:38:21.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7174" for this suite. 
STEP: Destroying namespace "webhook-7174-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.774 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":191,"skipped":3203,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:38:22.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 20 00:38:22.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 20 00:38:22.694: INFO: stderr: "" May 
20 00:38:22.694: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:38:22.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3315" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":192,"skipped":3204,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:38:22.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get 
the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:38:56.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6884" for this suite. • [SLOW TEST:33.367 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":193,"skipped":3246,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:38:56.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-777 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 20 00:38:56.248: INFO: Found 0 stateful pods, waiting for 3 May 20 00:39:06.253: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 20 00:39:06.253: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 20 00:39:06.253: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 20 00:39:16.254: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 20 00:39:16.254: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 20 00:39:16.254: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 20 00:39:16.283: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 20 00:39:26.345: INFO: Updating stateful set ss2 May 20 00:39:26.397: INFO: Waiting for Pod statefulset-777/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 20 00:39:37.436: INFO: Found 2 stateful pods, waiting for 3 May 20 00:39:47.442: INFO: Waiting for pod ss2-0 to enter Running 
- Ready=true, currently Running - Ready=true May 20 00:39:47.442: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 20 00:39:47.442: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 20 00:39:47.464: INFO: Updating stateful set ss2 May 20 00:39:47.513: INFO: Waiting for Pod statefulset-777/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 20 00:39:57.534: INFO: Updating stateful set ss2 May 20 00:39:57.718: INFO: Waiting for StatefulSet statefulset-777/ss2 to complete update May 20 00:39:57.718: INFO: Waiting for Pod statefulset-777/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 20 00:40:07.727: INFO: Waiting for StatefulSet statefulset-777/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 20 00:40:17.726: INFO: Deleting all statefulset in ns statefulset-777 May 20 00:40:17.730: INFO: Scaling statefulset ss2 to 0 May 20 00:40:37.744: INFO: Waiting for statefulset status.replicas updated to 0 May 20 00:40:37.747: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:40:37.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-777" for this suite. 
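The canary and phased rolling updates exercised above are driven by the StatefulSet `RollingUpdate` strategy's `partition` field. As an annotation, here is a minimal sketch of a manifest shape consistent with this test's log (the service name `test` and image come from the log; replica count matches the three `ss2-*` pods; other values are illustrative assumptions):

```yaml
# Sketch only: a StatefulSet whose updates can be gated with a
# partition, as in the canary/phased rolling-update test above.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test          # matches "Creating service test" in the log
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # partition greater than replicas holds every pod at the old
      # revision ("Not applying an update when the partition is greater
      # than the number of replicas"); lowering it to 2 canaries only
      # ss2-2; lowering it stepwise to 0 performs the phased roll-out.
      partition: 3
```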
• [SLOW TEST:101.696 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":194,"skipped":3246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:40:37.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-4b8f2d31-5a30-4a07-8261-c5b603cde9d2 in namespace container-probe-5765 May 20 00:40:41.874: INFO: Started pod liveness-4b8f2d31-5a30-4a07-8261-c5b603cde9d2 in namespace container-probe-5765 STEP: checking the pod's 
current state and verifying that restartCount is present May 20 00:40:41.877: INFO: Initial restart count of pod liveness-4b8f2d31-5a30-4a07-8261-c5b603cde9d2 is 0 May 20 00:40:57.928: INFO: Restart count of pod container-probe-5765/liveness-4b8f2d31-5a30-4a07-8261-c5b603cde9d2 is now 1 (16.051473047s elapsed) May 20 00:41:18.024: INFO: Restart count of pod container-probe-5765/liveness-4b8f2d31-5a30-4a07-8261-c5b603cde9d2 is now 2 (36.14669347s elapsed) May 20 00:41:38.071: INFO: Restart count of pod container-probe-5765/liveness-4b8f2d31-5a30-4a07-8261-c5b603cde9d2 is now 3 (56.194077154s elapsed) May 20 00:41:58.118: INFO: Restart count of pod container-probe-5765/liveness-4b8f2d31-5a30-4a07-8261-c5b603cde9d2 is now 4 (1m16.241374012s elapsed) May 20 00:43:10.348: INFO: Restart count of pod container-probe-5765/liveness-4b8f2d31-5a30-4a07-8261-c5b603cde9d2 is now 5 (2m28.471136107s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:43:10.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5765" for this suite. 
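The monotonically increasing restart count above is produced by a liveness probe that keeps failing after startup. A minimal sketch of a pod that behaves this way (the command, image, and probe timings are illustrative assumptions, not the test's exact spec):

```yaml
# Sketch only: the liveness probe passes while /tmp/healthy exists,
# then fails once the file is removed, so the kubelet restarts the
# container repeatedly and restartCount climbs 1, 2, 3, ...
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
```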
• [SLOW TEST:152.613 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":195,"skipped":3279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:43:10.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 20 00:43:10.426: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
May 20 00:43:11.360: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 20 00:43:13.842: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532191, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532191, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532191, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532191, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 00:43:15.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532191, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532191, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532191, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532191, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 00:43:18.476: INFO: Waited 622.42886ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:43:18.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4964" for this suite. • [SLOW TEST:8.639 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":196,"skipped":3309,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:43:19.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 20 00:43:19.509: INFO: namespace kubectl-8118 May 20 00:43:19.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8118' May 20 00:43:19.966: INFO: stderr: "" May 20 00:43:19.966: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 20 00:43:20.972: INFO: Selector matched 1 pods for map[app:agnhost] May 20 00:43:20.972: INFO: Found 0 / 1 May 20 00:43:21.970: INFO: Selector matched 1 pods for map[app:agnhost] May 20 00:43:21.970: INFO: Found 0 / 1 May 20 00:43:22.971: INFO: Selector matched 1 pods for map[app:agnhost] May 20 00:43:22.971: INFO: Found 1 / 1 May 20 00:43:22.971: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 20 00:43:22.975: INFO: Selector matched 1 pods for map[app:agnhost] May 20 00:43:22.975: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 20 00:43:22.975: INFO: wait on agnhost-master startup in kubectl-8118 May 20 00:43:22.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-wc6m5 agnhost-master --namespace=kubectl-8118' May 20 00:43:23.103: INFO: stderr: "" May 20 00:43:23.103: INFO: stdout: "Paused\n" STEP: exposing RC May 20 00:43:23.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8118' May 20 00:43:23.276: INFO: stderr: "" May 20 00:43:23.276: INFO: stdout: "service/rm2 exposed\n" May 20 00:43:23.310: INFO: Service rm2 in namespace kubectl-8118 found. 
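The `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` invocation above is roughly equivalent to applying a Service like the following (name, namespace, and ports are taken from the log; the selector is inferred from the `map[app:agnhost]` selector logged earlier):

```yaml
# Sketch only: the Service that "kubectl expose" generates for the
# agnhost-master replication controller in this test.
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-8118
spec:
  selector:
    app: agnhost
  ports:
  - port: 1234        # the Service's own port (--port)
    targetPort: 6379  # the container port traffic is forwarded to (--target-port)
```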
STEP: exposing service May 20 00:43:25.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8118' May 20 00:43:25.494: INFO: stderr: "" May 20 00:43:25.494: INFO: stdout: "service/rm3 exposed\n" May 20 00:43:25.525: INFO: Service rm3 in namespace kubectl-8118 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:43:27.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8118" for this suite. • [SLOW TEST:8.517 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":197,"skipped":3313,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:43:27.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 00:43:27.594: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 00:43:27.603: INFO: Waiting for terminating namespaces to be deleted... May 20 00:43:27.605: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 20 00:43:27.609: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 20 00:43:27.609: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 20 00:43:27.609: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 20 00:43:27.609: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 20 00:43:27.609: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 20 00:43:27.609: INFO: Container kindnet-cni ready: true, restart count 0 May 20 00:43:27.609: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 20 00:43:27.609: INFO: Container kube-proxy ready: true, restart count 0 May 20 00:43:27.609: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 20 00:43:27.641: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 20 00:43:27.641: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 20 00:43:27.641: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 20 00:43:27.641: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 20 00:43:27.641: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 
container statuses recorded) May 20 00:43:27.641: INFO: Container kindnet-cni ready: true, restart count 0 May 20 00:43:27.641: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 20 00:43:27.641: INFO: Container kube-proxy ready: true, restart count 0 May 20 00:43:27.641: INFO: agnhost-master-wc6m5 from kubectl-8118 started at 2020-05-20 00:43:20 +0000 UTC (1 container statuses recorded) May 20 00:43:27.641: INFO: Container agnhost-master ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-fd37ab56-d64b-4b78-be5e-dfe2f8b95e8c 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-fd37ab56-d64b-4b78-be5e-dfe2f8b95e8c off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-fd37ab56-d64b-4b78-be5e-dfe2f8b95e8c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:43:45.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-834" for this suite. 
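The scheduler treats the triple (hostIP, hostPort, protocol) as the conflict key, which is why all three pods above can be scheduled onto the same node despite sharing hostPort 54321. A sketch of the distinguishing port specs (pod names, ports, hostIPs, and protocol come from the log; the image and containerPort are illustrative assumptions):

```yaml
# Sketch only: three pods sharing hostPort 54321 without conflict,
# because each differs in hostIP or protocol.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: agnhost
    image: registry.example/agnhost   # placeholder image
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  containers:
  - name: agnhost
    image: registry.example/agnhost
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.2   # same port and protocol as pod1, different hostIP
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod3
spec:
  containers:
  - name: agnhost
    image: registry.example/agnhost
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.2   # same hostIP and port as pod2, different protocol
      protocol: UDP
```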
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:18.387 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":198,"skipped":3313,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:43:45.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:43:57.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1177" for this suite. • [SLOW TEST:11.250 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":288,"completed":199,"skipped":3320,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:43:57.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 20 00:44:01.312: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:44:01.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9699" for this suite. 
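The case above verifies that when the pod succeeds, the termination message still comes from the termination-log file even though `FallbackToLogsOnError` is set (the log fallback applies only on error exits). A minimal sketch of such a container (the image and command are illustrative assumptions; the "OK" message matches the log):

```yaml
# Sketch only: the container writes "OK" to the default termination
# message path and exits 0, so the message is read from the file,
# not from the container logs.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePolicy: FallbackToLogsOnError
```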
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":200,"skipped":3339,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:44:01.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-8d6a000d-5f9b-45be-a2c0-faec9a63952a STEP: Creating a pod to test consume secrets May 20 00:44:01.562: INFO: Waiting up to 5m0s for pod "pod-secrets-af3aa426-606c-4315-84f8-abe974036dd5" in namespace "secrets-7021" to be "Succeeded or Failed" May 20 00:44:01.568: INFO: Pod "pod-secrets-af3aa426-606c-4315-84f8-abe974036dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.549362ms May 20 00:44:03.602: INFO: Pod "pod-secrets-af3aa426-606c-4315-84f8-abe974036dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039827517s May 20 00:44:05.606: INFO: Pod "pod-secrets-af3aa426-606c-4315-84f8-abe974036dd5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.043686013s STEP: Saw pod success May 20 00:44:05.606: INFO: Pod "pod-secrets-af3aa426-606c-4315-84f8-abe974036dd5" satisfied condition "Succeeded or Failed" May 20 00:44:05.609: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-af3aa426-606c-4315-84f8-abe974036dd5 container secret-volume-test: STEP: delete the pod May 20 00:44:05.666: INFO: Waiting for pod pod-secrets-af3aa426-606c-4315-84f8-abe974036dd5 to disappear May 20 00:44:05.670: INFO: Pod pod-secrets-af3aa426-606c-4315-84f8-abe974036dd5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:44:05.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7021" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":201,"skipped":3342,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:44:05.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 20 00:44:05.739: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:44:11.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8561" for this suite. • [SLOW TEST:6.093 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":202,"skipped":3347,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:44:11.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 20 
00:44:11.881: INFO: Waiting up to 5m0s for pod "pod-e02f953e-d326-490b-bf9c-daab1a5b9f9b" in namespace "emptydir-5862" to be "Succeeded or Failed" May 20 00:44:11.896: INFO: Pod "pod-e02f953e-d326-490b-bf9c-daab1a5b9f9b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.560694ms May 20 00:44:13.901: INFO: Pod "pod-e02f953e-d326-490b-bf9c-daab1a5b9f9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01973531s May 20 00:44:15.906: INFO: Pod "pod-e02f953e-d326-490b-bf9c-daab1a5b9f9b": Phase="Running", Reason="", readiness=true. Elapsed: 4.024410667s May 20 00:44:17.910: INFO: Pod "pod-e02f953e-d326-490b-bf9c-daab1a5b9f9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029082695s STEP: Saw pod success May 20 00:44:17.910: INFO: Pod "pod-e02f953e-d326-490b-bf9c-daab1a5b9f9b" satisfied condition "Succeeded or Failed" May 20 00:44:17.914: INFO: Trying to get logs from node latest-worker pod pod-e02f953e-d326-490b-bf9c-daab1a5b9f9b container test-container: STEP: delete the pod May 20 00:44:17.960: INFO: Waiting for pod pod-e02f953e-d326-490b-bf9c-daab1a5b9f9b to disappear May 20 00:44:17.976: INFO: Pod pod-e02f953e-d326-490b-bf9c-daab1a5b9f9b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:44:17.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5862" for this suite. 
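The (non-root,0644,tmpfs) case above mounts a memory-backed emptyDir into a container running as a non-root user and checks the created file's mode. A sketch of the pod shape under those assumptions (the UID, image, command, and mount path are illustrative, not the test's exact spec):

```yaml
# Sketch only: a tmpfs-backed emptyDir exercised by a non-root
# container that creates a 0644 file in it.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory          # tmpfs-backed
```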
• [SLOW TEST:6.212 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":203,"skipped":3370,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:44:17.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-65320a11-7cc8-47b8-849a-420c04cefc58 STEP: Creating configMap with name cm-test-opt-upd-89fcf898-beca-4d82-a8c7-8e8f854892cb STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-65320a11-7cc8-47b8-849a-420c04cefc58 STEP: Updating configmap cm-test-opt-upd-89fcf898-beca-4d82-a8c7-8e8f854892cb STEP: Creating configMap with name cm-test-opt-create-43214e20-8ed0-4408-b0a2-e6cd2d316316 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 
May 20 00:44:26.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7097" for this suite. • [SLOW TEST:8.298 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":204,"skipped":3391,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:44:26.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:44:26.347: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-5bf9c812-0580-45f5-a451-399a94757175" in namespace "security-context-test-9105" to be "Succeeded or Failed" May 20 00:44:26.350: INFO: Pod 
"busybox-readonly-false-5bf9c812-0580-45f5-a451-399a94757175": Phase="Pending", Reason="", readiness=false. Elapsed: 3.165214ms May 20 00:44:28.362: INFO: Pod "busybox-readonly-false-5bf9c812-0580-45f5-a451-399a94757175": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015873732s May 20 00:44:30.366: INFO: Pod "busybox-readonly-false-5bf9c812-0580-45f5-a451-399a94757175": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018955446s May 20 00:44:30.366: INFO: Pod "busybox-readonly-false-5bf9c812-0580-45f5-a451-399a94757175" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:44:30.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9105" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":205,"skipped":3396,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:44:30.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 00:44:30.966: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 00:44:32.976: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532271, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532271, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532271, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532270, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 00:44:34.997: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532271, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532271, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532271, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532270, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 00:44:38.041: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:44:38.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9404" for this suite. STEP: Destroying namespace "webhook-9404-markers" for this suite. 
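Annotation: the webhook test above first updates the validating webhook's rules to drop the CREATE operation (so a non-compliant configMap is admitted), then patches CREATE back in (so it is rejected again). A hedged sketch of the rule-matching idea (invented helper, not the apiserver's implementation):

```python
# Sketch: a request is sent to a validating webhook only if some rule covers
# both its operation and its resource; "*" acts as a wildcard.
def webhook_matches(rules, operation, resource):
    for rule in rules:
        ops_ok = "*" in rule["operations"] or operation in rule["operations"]
        res_ok = "*" in rule["resources"] or resource in rule["resources"]
        if ops_ok and res_ok:
            return True
    return False

rules = [{"operations": ["CREATE"], "resources": ["configmaps"]}]
print(webhook_matches(rules, "CREATE", "configmaps"))  # True: create is intercepted
rules[0]["operations"] = ["UPDATE"]  # analogous to the test's first rule update
print(webhook_matches(rules, "CREATE", "configmaps"))  # False: create passes through
```

This is why the middle "Creating a configMap that does not comply" step succeeds while the first and last are rejected.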
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.928 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":206,"skipped":3442,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:44:38.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 20 00:44:38.411: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2068 
/api/v1/namespaces/watch-2068/configmaps/e2e-watch-test-label-changed c1b7231d-0c8a-4706-9941-154cc6017915 6096137 0 2020-05-20 00:44:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-20 00:44:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 20 00:44:38.411: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2068 /api/v1/namespaces/watch-2068/configmaps/e2e-watch-test-label-changed c1b7231d-0c8a-4706-9941-154cc6017915 6096138 0 2020-05-20 00:44:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-20 00:44:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 20 00:44:38.411: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2068 /api/v1/namespaces/watch-2068/configmaps/e2e-watch-test-label-changed c1b7231d-0c8a-4706-9941-154cc6017915 6096139 0 2020-05-20 00:44:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-20 00:44:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 20 00:44:48.484: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2068 /api/v1/namespaces/watch-2068/configmaps/e2e-watch-test-label-changed c1b7231d-0c8a-4706-9941-154cc6017915 6096188 0 2020-05-20 00:44:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-20 00:44:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 20 00:44:48.484: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2068 /api/v1/namespaces/watch-2068/configmaps/e2e-watch-test-label-changed c1b7231d-0c8a-4706-9941-154cc6017915 6096189 0 2020-05-20 00:44:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-20 00:44:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 20 00:44:48.485: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2068 /api/v1/namespaces/watch-2068/configmaps/e2e-watch-test-label-changed c1b7231d-0c8a-4706-9941-154cc6017915 6096190 0 2020-05-20 00:44:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-20 00:44:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:44:48.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2068" for this suite. 
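Annotation: the watch events logged above show the key semantics under test — for a label-selected watch, an object that stops matching the selector is reported as DELETED, and one that starts matching again as ADDED, even though the underlying object was only relabeled. A small Python sketch of that event mapping (illustrative only, not client-go code):

```python
# Sketch: what event a label-selected watch reports for one object transition.
# old_labels / new_labels of None mean the object did not exist / was deleted.
def watch_event(selector, old_labels, new_labels):
    was = old_labels is not None and all(
        old_labels.get(k) == v for k, v in selector.items())
    now = new_labels is not None and all(
        new_labels.get(k) == v for k, v in selector.items())
    if was and now:
        return "MODIFIED"
    if was and not now:
        return "DELETED"   # stopped matching the selector
    if not was and now:
        return "ADDED"     # started matching (e.g. label value restored)
    return None            # never visible to this watch

sel = {"watch-this-configmap": "label-changed-and-restored"}
# relabeling away from the selector looks like a deletion:
print(watch_event(sel, sel, {"watch-this-configmap": "other"}))  # DELETED
# restoring the label looks like an add:
print(watch_event(sel, {"watch-this-configmap": "other"}, sel))  # ADDED
```

This matches the log: the "changing the label value" step yields DELETED, the "changing the label value back" step yields ADDED, and the final real deletion yields DELETED again.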
• [SLOW TEST:10.211 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":207,"skipped":3445,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:44:48.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0520 00:45:29.210148 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 20 00:45:29.210: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:45:29.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7387" for this suite. 
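Annotation: the garbage-collector test above deletes the ReplicationController with orphaning delete options and then waits 30 seconds to confirm the pods were *not* collected. A minimal Python sketch of that propagation-policy semantics (names invented; this is a simplification, not the garbage collector's code):

```python
# Sketch: with propagationPolicy=Orphan the GC strips the owner reference and
# keeps the dependent pods; Background/Foreground cascade the deletion.
def delete_owner(pods, propagation_policy):
    """Return the pods that survive deleting their owner."""
    if propagation_policy == "Orphan":
        return [{**p, "ownerReferences": []} for p in pods]
    return []  # Background/Foreground: dependents are garbage-collected

pods = [{"name": "rc-pod-1", "ownerReferences": [{"name": "my-rc"}]}]
print(delete_owner(pods, "Orphan"))      # pod survives, owner ref cleared
print(delete_owner(pods, "Background"))  # pod is collected
```

The 30-second wait in the test exists because orphaning bugs would show up as the GC *eventually* deleting the pods, not immediately.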
• [SLOW TEST:40.705 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":208,"skipped":3491,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:45:29.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-765 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 20 00:45:29.349: INFO: Found 0 stateful pods, waiting for 3 May 20 00:45:39.418: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 20 00:45:39.418: INFO: Waiting for pod ss2-1 to enter Running 
- Ready=true, currently Running - Ready=true May 20 00:45:39.418: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 20 00:45:49.354: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 20 00:45:49.354: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 20 00:45:49.354: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 20 00:45:49.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-765 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 00:45:52.349: INFO: stderr: "I0520 00:45:52.228322 2867 log.go:172] (0xc000c6c0b0) (0xc000391040) Create stream\nI0520 00:45:52.228363 2867 log.go:172] (0xc000c6c0b0) (0xc000391040) Stream added, broadcasting: 1\nI0520 00:45:52.231249 2867 log.go:172] (0xc000c6c0b0) Reply frame received for 1\nI0520 00:45:52.231286 2867 log.go:172] (0xc000c6c0b0) (0xc000494140) Create stream\nI0520 00:45:52.231297 2867 log.go:172] (0xc000c6c0b0) (0xc000494140) Stream added, broadcasting: 3\nI0520 00:45:52.232350 2867 log.go:172] (0xc000c6c0b0) Reply frame received for 3\nI0520 00:45:52.232396 2867 log.go:172] (0xc000c6c0b0) (0xc00054c280) Create stream\nI0520 00:45:52.232409 2867 log.go:172] (0xc000c6c0b0) (0xc00054c280) Stream added, broadcasting: 5\nI0520 00:45:52.233812 2867 log.go:172] (0xc000c6c0b0) Reply frame received for 5\nI0520 00:45:52.311152 2867 log.go:172] (0xc000c6c0b0) Data frame received for 5\nI0520 00:45:52.311183 2867 log.go:172] (0xc00054c280) (5) Data frame handling\nI0520 00:45:52.311196 2867 log.go:172] (0xc00054c280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0520 00:45:52.341076 2867 log.go:172] (0xc000c6c0b0) Data frame received for 5\nI0520 00:45:52.341280 2867 log.go:172] (0xc00054c280) (5) Data 
frame handling\nI0520 00:45:52.341313 2867 log.go:172] (0xc000c6c0b0) Data frame received for 3\nI0520 00:45:52.341330 2867 log.go:172] (0xc000494140) (3) Data frame handling\nI0520 00:45:52.341351 2867 log.go:172] (0xc000494140) (3) Data frame sent\nI0520 00:45:52.342087 2867 log.go:172] (0xc000c6c0b0) Data frame received for 3\nI0520 00:45:52.342117 2867 log.go:172] (0xc000494140) (3) Data frame handling\nI0520 00:45:52.344031 2867 log.go:172] (0xc000c6c0b0) Data frame received for 1\nI0520 00:45:52.344046 2867 log.go:172] (0xc000391040) (1) Data frame handling\nI0520 00:45:52.344053 2867 log.go:172] (0xc000391040) (1) Data frame sent\nI0520 00:45:52.344061 2867 log.go:172] (0xc000c6c0b0) (0xc000391040) Stream removed, broadcasting: 1\nI0520 00:45:52.344073 2867 log.go:172] (0xc000c6c0b0) Go away received\nI0520 00:45:52.344531 2867 log.go:172] (0xc000c6c0b0) (0xc000391040) Stream removed, broadcasting: 1\nI0520 00:45:52.344549 2867 log.go:172] (0xc000c6c0b0) (0xc000494140) Stream removed, broadcasting: 3\nI0520 00:45:52.344558 2867 log.go:172] (0xc000c6c0b0) (0xc00054c280) Stream removed, broadcasting: 5\n" May 20 00:45:52.349: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 00:45:52.349: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 20 00:46:02.383: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 20 00:46:12.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-765 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 20 00:46:12.628: INFO: stderr: "I0520 00:46:12.543589 2901 log.go:172] (0xc000b42f20) (0xc00013b900) 
Create stream\nI0520 00:46:12.543659 2901 log.go:172] (0xc000b42f20) (0xc00013b900) Stream added, broadcasting: 1\nI0520 00:46:12.547218 2901 log.go:172] (0xc000b42f20) Reply frame received for 1\nI0520 00:46:12.547273 2901 log.go:172] (0xc000b42f20) (0xc00063a320) Create stream\nI0520 00:46:12.547295 2901 log.go:172] (0xc000b42f20) (0xc00063a320) Stream added, broadcasting: 3\nI0520 00:46:12.548187 2901 log.go:172] (0xc000b42f20) Reply frame received for 3\nI0520 00:46:12.548225 2901 log.go:172] (0xc000b42f20) (0xc000332280) Create stream\nI0520 00:46:12.548239 2901 log.go:172] (0xc000b42f20) (0xc000332280) Stream added, broadcasting: 5\nI0520 00:46:12.549084 2901 log.go:172] (0xc000b42f20) Reply frame received for 5\nI0520 00:46:12.620347 2901 log.go:172] (0xc000b42f20) Data frame received for 3\nI0520 00:46:12.620424 2901 log.go:172] (0xc00063a320) (3) Data frame handling\nI0520 00:46:12.620441 2901 log.go:172] (0xc00063a320) (3) Data frame sent\nI0520 00:46:12.620454 2901 log.go:172] (0xc000b42f20) Data frame received for 3\nI0520 00:46:12.620469 2901 log.go:172] (0xc000b42f20) Data frame received for 5\nI0520 00:46:12.620495 2901 log.go:172] (0xc000332280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0520 00:46:12.620511 2901 log.go:172] (0xc00063a320) (3) Data frame handling\nI0520 00:46:12.620543 2901 log.go:172] (0xc000332280) (5) Data frame sent\nI0520 00:46:12.620563 2901 log.go:172] (0xc000b42f20) Data frame received for 5\nI0520 00:46:12.620574 2901 log.go:172] (0xc000332280) (5) Data frame handling\nI0520 00:46:12.621869 2901 log.go:172] (0xc000b42f20) Data frame received for 1\nI0520 00:46:12.621893 2901 log.go:172] (0xc00013b900) (1) Data frame handling\nI0520 00:46:12.621904 2901 log.go:172] (0xc00013b900) (1) Data frame sent\nI0520 00:46:12.621920 2901 log.go:172] (0xc000b42f20) (0xc00013b900) Stream removed, broadcasting: 1\nI0520 00:46:12.621934 2901 log.go:172] (0xc000b42f20) Go away received\nI0520 
00:46:12.622244 2901 log.go:172] (0xc000b42f20) (0xc00013b900) Stream removed, broadcasting: 1\nI0520 00:46:12.622268 2901 log.go:172] (0xc000b42f20) (0xc00063a320) Stream removed, broadcasting: 3\nI0520 00:46:12.622277 2901 log.go:172] (0xc000b42f20) (0xc000332280) Stream removed, broadcasting: 5\n" May 20 00:46:12.628: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 20 00:46:12.628: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 20 00:46:22.650: INFO: Waiting for StatefulSet statefulset-765/ss2 to complete update May 20 00:46:22.650: INFO: Waiting for Pod statefulset-765/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 20 00:46:22.650: INFO: Waiting for Pod statefulset-765/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 20 00:46:22.650: INFO: Waiting for Pod statefulset-765/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 20 00:46:32.658: INFO: Waiting for StatefulSet statefulset-765/ss2 to complete update May 20 00:46:32.658: INFO: Waiting for Pod statefulset-765/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 20 00:46:42.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-765 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 20 00:46:42.927: INFO: stderr: "I0520 00:46:42.789453 2921 log.go:172] (0xc000ad9340) (0xc0000f2fa0) Create stream\nI0520 00:46:42.789511 2921 log.go:172] (0xc000ad9340) (0xc0000f2fa0) Stream added, broadcasting: 1\nI0520 00:46:42.791268 2921 log.go:172] (0xc000ad9340) Reply frame received for 1\nI0520 00:46:42.791303 2921 log.go:172] (0xc000ad9340) (0xc00076ae60) Create stream\nI0520 00:46:42.791312 2921 log.go:172] (0xc000ad9340) (0xc00076ae60) 
Stream added, broadcasting: 3\nI0520 00:46:42.792002 2921 log.go:172] (0xc000ad9340) Reply frame received for 3\nI0520 00:46:42.792023 2921 log.go:172] (0xc000ad9340) (0xc0002a0640) Create stream\nI0520 00:46:42.792029 2921 log.go:172] (0xc000ad9340) (0xc0002a0640) Stream added, broadcasting: 5\nI0520 00:46:42.792732 2921 log.go:172] (0xc000ad9340) Reply frame received for 5\nI0520 00:46:42.891935 2921 log.go:172] (0xc000ad9340) Data frame received for 5\nI0520 00:46:42.891988 2921 log.go:172] (0xc0002a0640) (5) Data frame handling\nI0520 00:46:42.892030 2921 log.go:172] (0xc0002a0640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0520 00:46:42.921723 2921 log.go:172] (0xc000ad9340) Data frame received for 3\nI0520 00:46:42.921759 2921 log.go:172] (0xc00076ae60) (3) Data frame handling\nI0520 00:46:42.921769 2921 log.go:172] (0xc00076ae60) (3) Data frame sent\nI0520 00:46:42.921775 2921 log.go:172] (0xc000ad9340) Data frame received for 3\nI0520 00:46:42.921779 2921 log.go:172] (0xc00076ae60) (3) Data frame handling\nI0520 00:46:42.921800 2921 log.go:172] (0xc000ad9340) Data frame received for 5\nI0520 00:46:42.921805 2921 log.go:172] (0xc0002a0640) (5) Data frame handling\nI0520 00:46:42.923592 2921 log.go:172] (0xc000ad9340) Data frame received for 1\nI0520 00:46:42.923613 2921 log.go:172] (0xc0000f2fa0) (1) Data frame handling\nI0520 00:46:42.923623 2921 log.go:172] (0xc0000f2fa0) (1) Data frame sent\nI0520 00:46:42.923632 2921 log.go:172] (0xc000ad9340) (0xc0000f2fa0) Stream removed, broadcasting: 1\nI0520 00:46:42.923710 2921 log.go:172] (0xc000ad9340) Go away received\nI0520 00:46:42.923968 2921 log.go:172] (0xc000ad9340) (0xc0000f2fa0) Stream removed, broadcasting: 1\nI0520 00:46:42.923981 2921 log.go:172] (0xc000ad9340) (0xc00076ae60) Stream removed, broadcasting: 3\nI0520 00:46:42.923986 2921 log.go:172] (0xc000ad9340) (0xc0002a0640) Stream removed, broadcasting: 5\n" May 20 00:46:42.927: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 20 00:46:42.927: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 20 00:46:52.968: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 20 00:47:03.033: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-765 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 20 00:47:03.268: INFO: stderr: "I0520 00:47:03.164377 2941 log.go:172] (0xc0004cd080) (0xc00091c6e0) Create stream\nI0520 00:47:03.164441 2941 log.go:172] (0xc0004cd080) (0xc00091c6e0) Stream added, broadcasting: 1\nI0520 00:47:03.169361 2941 log.go:172] (0xc0004cd080) Reply frame received for 1\nI0520 00:47:03.169398 2941 log.go:172] (0xc0004cd080) (0xc000394d20) Create stream\nI0520 00:47:03.169407 2941 log.go:172] (0xc0004cd080) (0xc000394d20) Stream added, broadcasting: 3\nI0520 00:47:03.170484 2941 log.go:172] (0xc0004cd080) Reply frame received for 3\nI0520 00:47:03.170541 2941 log.go:172] (0xc0004cd080) (0xc00025e000) Create stream\nI0520 00:47:03.170554 2941 log.go:172] (0xc0004cd080) (0xc00025e000) Stream added, broadcasting: 5\nI0520 00:47:03.171619 2941 log.go:172] (0xc0004cd080) Reply frame received for 5\nI0520 00:47:03.262223 2941 log.go:172] (0xc0004cd080) Data frame received for 3\nI0520 00:47:03.262281 2941 log.go:172] (0xc000394d20) (3) Data frame handling\nI0520 00:47:03.262311 2941 log.go:172] (0xc000394d20) (3) Data frame sent\nI0520 00:47:03.262363 2941 log.go:172] (0xc0004cd080) Data frame received for 5\nI0520 00:47:03.262397 2941 log.go:172] (0xc00025e000) (5) Data frame handling\nI0520 00:47:03.262407 2941 log.go:172] (0xc00025e000) (5) Data frame sent\nI0520 00:47:03.262414 2941 log.go:172] (0xc0004cd080) Data frame received for 5\nI0520 00:47:03.262420 2941 log.go:172] 
(0xc00025e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0520 00:47:03.262432 2941 log.go:172] (0xc0004cd080) Data frame received for 3\nI0520 00:47:03.262451 2941 log.go:172] (0xc000394d20) (3) Data frame handling\nI0520 00:47:03.263830 2941 log.go:172] (0xc0004cd080) Data frame received for 1\nI0520 00:47:03.263842 2941 log.go:172] (0xc00091c6e0) (1) Data frame handling\nI0520 00:47:03.263855 2941 log.go:172] (0xc00091c6e0) (1) Data frame sent\nI0520 00:47:03.263867 2941 log.go:172] (0xc0004cd080) (0xc00091c6e0) Stream removed, broadcasting: 1\nI0520 00:47:03.264005 2941 log.go:172] (0xc0004cd080) Go away received\nI0520 00:47:03.264127 2941 log.go:172] (0xc0004cd080) (0xc00091c6e0) Stream removed, broadcasting: 1\nI0520 00:47:03.264141 2941 log.go:172] (0xc0004cd080) (0xc000394d20) Stream removed, broadcasting: 3\nI0520 00:47:03.264148 2941 log.go:172] (0xc0004cd080) (0xc00025e000) Stream removed, broadcasting: 5\n" May 20 00:47:03.268: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 20 00:47:03.268: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 20 00:47:13.291: INFO: Waiting for StatefulSet statefulset-765/ss2 to complete update May 20 00:47:13.291: INFO: Waiting for Pod statefulset-765/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 20 00:47:13.291: INFO: Waiting for Pod statefulset-765/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 20 00:47:13.291: INFO: Waiting for Pod statefulset-765/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 20 00:47:23.299: INFO: Waiting for StatefulSet statefulset-765/ss2 to complete update May 20 00:47:23.299: INFO: Waiting for Pod statefulset-765/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 20 00:47:23.299: INFO: Waiting for Pod statefulset-765/ss2-1 to have 
revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 20 00:47:33.299: INFO: Waiting for StatefulSet statefulset-765/ss2 to complete update May 20 00:47:33.299: INFO: Waiting for Pod statefulset-765/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 20 00:47:43.300: INFO: Deleting all statefulset in ns statefulset-765 May 20 00:47:43.303: INFO: Scaling statefulset ss2 to 0 May 20 00:48:13.353: INFO: Waiting for statefulset status.replicas updated to 0 May 20 00:48:13.356: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:48:13.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-765" for this suite. • [SLOW TEST:164.163 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":209,"skipped":3491,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] 
Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:48:13.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:48:13.425: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 20 00:48:13.439: INFO: Pod name sample-pod: Found 0 pods out of 1 May 20 00:48:18.442: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 20 00:48:18.443: INFO: Creating deployment "test-rolling-update-deployment" May 20 00:48:18.447: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 20 00:48:18.523: INFO: deployment "test-rolling-update-deployment" doesn't have the required revision set May 20 00:48:20.668: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 20 00:48:20.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532498, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532498, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532498, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725532498, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 00:48:22.694: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 20 00:48:22.705: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2022 /apis/apps/v1/namespaces/deployment-2022/deployments/test-rolling-update-deployment a0510d79-8528-4282-91e7-017d76eb0ba2 6097457 1 2020-05-20 00:48:18 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-20 00:48:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-20 00:48:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ac1fc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-20 00:48:18 +0000 
UTC,LastTransitionTime:2020-05-20 00:48:18 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-20 00:48:22 +0000 UTC,LastTransitionTime:2020-05-20 00:48:18 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 20 00:48:22.708: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-2022 /apis/apps/v1/namespaces/deployment-2022/replicasets/test-rolling-update-deployment-df7bb669b 941a0f8d-c0b3-45b0-a110-0f5333dd0a7c 6097446 1 2020-05-20 00:48:18 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment a0510d79-8528-4282-91e7-017d76eb0ba2 0xc00244c540 0xc00244c541}] [] [{kube-controller-manager Update apps/v1 2020-05-20 00:48:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a0510d79-8528-4282-91e7-017d76eb0ba2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00244c5b8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 20 00:48:22.708: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 20 00:48:22.708: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2022 /apis/apps/v1/namespaces/deployment-2022/replicasets/test-rolling-update-controller aeb766ae-527a-4efe-bddb-7bbb789e439c 6097456 2 2020-05-20 00:48:13 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment a0510d79-8528-4282-91e7-017d76eb0ba2 0xc00244c42f 0xc00244c440}] [] [{e2e.test Update apps/v1 2020-05-20 00:48:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-20 00:48:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a0510d79-8528-4282-91e7-017d76eb0ba2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00244c4d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 20 00:48:22.712: INFO: Pod "test-rolling-update-deployment-df7bb669b-wkqsl" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-wkqsl test-rolling-update-deployment-df7bb669b- deployment-2022 /api/v1/namespaces/deployment-2022/pods/test-rolling-update-deployment-df7bb669b-wkqsl 0738a45d-1c23-48f4-a170-9b1e6d85c740 6097445 0 2020-05-20 00:48:18 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b 941a0f8d-c0b3-45b0-a110-0f5333dd0a7c 0xc00244ca80 0xc00244ca81}] [] [{kube-controller-manager Update v1 2020-05-20 00:48:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"941a0f8d-c0b3-45b0-a110-0f5333dd0a7c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:48:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.225\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fdjzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fdjzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Reso
urces:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fdjzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodC
ondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:48:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:48:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:48:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:48:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.225,StartTime:2020-05-20 00:48:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 00:48:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://3d9ed89038c992377b8c52307d2ea628c6568f3b071f474ed97b4a0bf243b79a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:48:22.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2022" for this suite. 
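[Editor's note] The Deployment dumped above can be approximated by the following manifest. This is an illustrative reconstruction, not the manifest the e2e framework generated; the name, labels, image, and the 25%/25% RollingUpdate parameters are taken from the spec dump in the log, everything else is defaulted.

```yaml
# Sketch of the Deployment under test: with strategy RollingUpdate and the
# default 25% maxSurge / 25% maxUnavailable, old pods are deleted only as
# replacement pods become available, which is what this test verifies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment   # name from the log; manifest reconstructed
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
```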
• [SLOW TEST:9.338 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":210,"skipped":3498,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:48:22.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-7e1445fd-7c8e-47f1-90d0-c0a9fbb3a4ef STEP: Creating a pod to test consume secrets May 20 00:48:23.095: INFO: Waiting up to 5m0s for pod "pod-secrets-d1fc0e74-2bd5-4ffe-a12e-224148599ee5" in namespace "secrets-1633" to be "Succeeded or Failed" May 20 00:48:23.113: INFO: Pod "pod-secrets-d1fc0e74-2bd5-4ffe-a12e-224148599ee5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.048539ms May 20 00:48:25.117: INFO: Pod "pod-secrets-d1fc0e74-2bd5-4ffe-a12e-224148599ee5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022334585s May 20 00:48:27.161: INFO: Pod "pod-secrets-d1fc0e74-2bd5-4ffe-a12e-224148599ee5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066372804s STEP: Saw pod success May 20 00:48:27.161: INFO: Pod "pod-secrets-d1fc0e74-2bd5-4ffe-a12e-224148599ee5" satisfied condition "Succeeded or Failed" May 20 00:48:27.165: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-d1fc0e74-2bd5-4ffe-a12e-224148599ee5 container secret-volume-test: STEP: delete the pod May 20 00:48:27.391: INFO: Waiting for pod pod-secrets-d1fc0e74-2bd5-4ffe-a12e-224148599ee5 to disappear May 20 00:48:27.404: INFO: Pod pod-secrets-d1fc0e74-2bd5-4ffe-a12e-224148599ee5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:48:27.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1633" for this suite. STEP: Destroying namespace "secret-namespace-493" for this suite. 
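[Editor's note] The Secrets test above mounts a Secret as a volume while an identically named Secret exists in a second namespace ("secret-namespace-493"). A minimal sketch of the pod side, assuming hypothetical names and a busybox image (the actual e2e pod differs):

```yaml
# Sketch: a pod consuming a Secret via a volume mount. The secretName is
# resolved only within the pod's own namespace, so a same-named Secret in
# another namespace cannot interfere with the mount.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test     # container name from the log
    image: busybox               # assumed image for illustration
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test    # hypothetical; matches only in this namespace
```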
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":211,"skipped":3504,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:48:27.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 20 00:48:27.508: INFO: Waiting up to 5m0s for pod "downward-api-b1c1fe76-20e8-41b3-9139-563c3e228990" in namespace "downward-api-7625" to be "Succeeded or Failed" May 20 00:48:27.521: INFO: Pod "downward-api-b1c1fe76-20e8-41b3-9139-563c3e228990": Phase="Pending", Reason="", readiness=false. Elapsed: 13.219891ms May 20 00:48:29.525: INFO: Pod "downward-api-b1c1fe76-20e8-41b3-9139-563c3e228990": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017614464s May 20 00:48:31.530: INFO: Pod "downward-api-b1c1fe76-20e8-41b3-9139-563c3e228990": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022334711s STEP: Saw pod success May 20 00:48:31.530: INFO: Pod "downward-api-b1c1fe76-20e8-41b3-9139-563c3e228990" satisfied condition "Succeeded or Failed" May 20 00:48:31.534: INFO: Trying to get logs from node latest-worker2 pod downward-api-b1c1fe76-20e8-41b3-9139-563c3e228990 container dapi-container: STEP: delete the pod May 20 00:48:31.561: INFO: Waiting for pod downward-api-b1c1fe76-20e8-41b3-9139-563c3e228990 to disappear May 20 00:48:31.573: INFO: Pod downward-api-b1c1fe76-20e8-41b3-9139-563c3e228990 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:48:31.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7625" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":212,"skipped":3559,"failed":0} ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:48:31.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 20 00:48:31.628: INFO: Waiting up to 5m0s for pod "downward-api-82068370-0f94-450b-94f4-06a1efddeb80" in namespace "downward-api-8297" to be "Succeeded or Failed" May 20 
00:48:31.633: INFO: Pod "downward-api-82068370-0f94-450b-94f4-06a1efddeb80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154218ms May 20 00:48:33.636: INFO: Pod "downward-api-82068370-0f94-450b-94f4-06a1efddeb80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00793754s May 20 00:48:35.640: INFO: Pod "downward-api-82068370-0f94-450b-94f4-06a1efddeb80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011994284s STEP: Saw pod success May 20 00:48:35.640: INFO: Pod "downward-api-82068370-0f94-450b-94f4-06a1efddeb80" satisfied condition "Succeeded or Failed" May 20 00:48:35.644: INFO: Trying to get logs from node latest-worker2 pod downward-api-82068370-0f94-450b-94f4-06a1efddeb80 container dapi-container: STEP: delete the pod May 20 00:48:35.676: INFO: Waiting for pod downward-api-82068370-0f94-450b-94f4-06a1efddeb80 to disappear May 20 00:48:35.680: INFO: Pod downward-api-82068370-0f94-450b-94f4-06a1efddeb80 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:48:35.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8297" for this suite. 
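[Editor's note] The two Downward API tests above (pod name/namespace/IP, then host IP) both rely on `fieldRef` environment variables. A combined illustrative sketch, with hypothetical pod and variable names:

```yaml
# Sketch: exposing pod name, namespace, pod IP, and host IP to the container
# as environment variables via the Downward API. These fieldPath values are
# the standard ones supported by the API.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container         # container name from the log
    image: busybox               # assumed image for illustration
    command: ["sh", "-c", "env | grep MY_"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```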
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":213,"skipped":3559,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:48:35.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 20 00:50:36.328: INFO: Successfully updated pod "var-expansion-2d35866b-8cfd-4065-85d8-f1d6d9ae64b8" STEP: waiting for pod running STEP: deleting the pod gracefully May 20 00:50:38.368: INFO: Deleting pod "var-expansion-2d35866b-8cfd-4065-85d8-f1d6d9ae64b8" in namespace "var-expansion-8113" May 20 00:50:38.374: INFO: Wait up to 5m0s for pod "var-expansion-2d35866b-8cfd-4065-85d8-f1d6d9ae64b8" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:51:16.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8113" for this suite. 
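[Editor's note] The Variable Expansion test above creates a pod whose volume mount uses `subPathExpr` with an expansion that initially cannot resolve (hence "creating the pod with failed condition"), then updates the pod so expansion succeeds. An illustrative sketch; sourcing the variable from a pod annotation is an assumption made here for the example:

```yaml
# Sketch: subPathExpr expands $(POD_SUBPATH) at container start. If the
# variable's source is unresolvable the container cannot start; once the pod
# is updated so the expansion resolves, the container runs.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example    # hypothetical name
  annotations:
    mysubpath: subdir            # hypothetical annotation driving the expansion
spec:
  containers:
  - name: dapi-container
    image: busybox               # assumed image for illustration
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: POD_SUBPATH
      valueFrom:
        fieldRef:
          fieldPath: metadata.annotations['mysubpath']
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(POD_SUBPATH)
  volumes:
  - name: workdir
    emptyDir: {}
```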
• [SLOW TEST:160.734 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":214,"skipped":3570,"failed":0} [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:51:16.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:51:47.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3141" for this suite. STEP: Destroying namespace "nsdeletetest-7342" for this suite. May 20 00:51:47.726: INFO: Namespace nsdeletetest-7342 was already deleted STEP: Destroying namespace "nsdeletetest-1662" for this suite. • [SLOW TEST:31.306 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":215,"skipped":3570,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:51:47.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 20 00:51:47.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 20 00:51:50.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9686 create -f -'
May 20 00:51:51.316: INFO: stderr: ""
May 20 00:51:51.316: INFO: stdout: "e2e-test-crd-publish-openapi-4163-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 20 00:51:51.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9686 delete e2e-test-crd-publish-openapi-4163-crds test-cr'
May 20 00:51:51.418: INFO: stderr: ""
May 20 00:51:51.418: INFO: stdout: "e2e-test-crd-publish-openapi-4163-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May 20 00:51:51.418: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9686 apply -f -'
May 20 00:51:51.742: INFO: stderr: ""
May 20 00:51:51.742: INFO: stdout: "e2e-test-crd-publish-openapi-4163-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 20 00:51:51.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9686 delete e2e-test-crd-publish-openapi-4163-crds test-cr'
May 20 00:51:51.867: INFO: stderr: ""
May 20 00:51:51.867: INFO: stdout: "e2e-test-crd-publish-openapi-4163-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May 20 00:51:51.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4163-crds'
May 20 00:51:52.108: INFO: stderr: ""
May 20 00:51:52.108: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4163-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:51:55.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9686" for this suite.
• [SLOW TEST:7.338 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":216,"skipped":3607,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:51:55.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-1358
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1358
STEP: Creating statefulset with conflicting port in namespace statefulset-1358
STEP: Waiting until pod test-pod will start running in namespace statefulset-1358
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1358
May 20 00:51:59.243: INFO: Observed stateful pod in namespace: statefulset-1358, name: ss-0, uid: c7b74e5b-0210-450f-a874-b8d834db1781, status phase: Pending. Waiting for statefulset controller to delete.
May 20 00:51:59.796: INFO: Observed stateful pod in namespace: statefulset-1358, name: ss-0, uid: c7b74e5b-0210-450f-a874-b8d834db1781, status phase: Failed. Waiting for statefulset controller to delete.
May 20 00:51:59.806: INFO: Observed stateful pod in namespace: statefulset-1358, name: ss-0, uid: c7b74e5b-0210-450f-a874-b8d834db1781, status phase: Failed. Waiting for statefulset controller to delete.
May 20 00:51:59.853: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1358
STEP: Removing pod with conflicting port in namespace statefulset-1358
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1358 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
May 20 00:52:05.950: INFO: Deleting all statefulset in ns statefulset-1358
May 20 00:52:05.954: INFO: Scaling statefulset ss to 0
May 20 00:52:15.985: INFO: Waiting for statefulset status.replicas updated to 0
May 20 00:52:15.988: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:52:16.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1358" for this suite.
• [SLOW TEST:20.971 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":217,"skipped":3610,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:52:16.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 20 00:52:16.106: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b946d08-756f-4a4f-9d9a-62a276c95fd9" in namespace "projected-9543" to be "Succeeded or Failed"
May 20 00:52:16.110: INFO: Pod "downwardapi-volume-9b946d08-756f-4a4f-9d9a-62a276c95fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166173ms
May 20 00:52:18.114: INFO: Pod "downwardapi-volume-9b946d08-756f-4a4f-9d9a-62a276c95fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008424229s
May 20 00:52:20.134: INFO: Pod "downwardapi-volume-9b946d08-756f-4a4f-9d9a-62a276c95fd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028463641s
STEP: Saw pod success
May 20 00:52:20.134: INFO: Pod "downwardapi-volume-9b946d08-756f-4a4f-9d9a-62a276c95fd9" satisfied condition "Succeeded or Failed"
May 20 00:52:20.137: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9b946d08-756f-4a4f-9d9a-62a276c95fd9 container client-container:
STEP: delete the pod
May 20 00:52:20.229: INFO: Waiting for pod downwardapi-volume-9b946d08-756f-4a4f-9d9a-62a276c95fd9 to disappear
May 20 00:52:20.322: INFO: Pod downwardapi-volume-9b946d08-756f-4a4f-9d9a-62a276c95fd9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:52:20.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9543" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":218,"skipped":3640,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:52:20.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret
with name secret-test-fe1db959-4cbd-4a88-899a-319f2d40aab5
STEP: Creating a pod to test consume secrets
May 20 00:52:20.400: INFO: Waiting up to 5m0s for pod "pod-secrets-3ae40cdc-b369-407f-9e42-215aa70a6ecf" in namespace "secrets-4204" to be "Succeeded or Failed"
May 20 00:52:20.404: INFO: Pod "pod-secrets-3ae40cdc-b369-407f-9e42-215aa70a6ecf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.638153ms
May 20 00:52:22.408: INFO: Pod "pod-secrets-3ae40cdc-b369-407f-9e42-215aa70a6ecf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008353886s
May 20 00:52:24.413: INFO: Pod "pod-secrets-3ae40cdc-b369-407f-9e42-215aa70a6ecf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013289455s
STEP: Saw pod success
May 20 00:52:24.413: INFO: Pod "pod-secrets-3ae40cdc-b369-407f-9e42-215aa70a6ecf" satisfied condition "Succeeded or Failed"
May 20 00:52:24.416: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-3ae40cdc-b369-407f-9e42-215aa70a6ecf container secret-volume-test:
STEP: delete the pod
May 20 00:52:24.458: INFO: Waiting for pod pod-secrets-3ae40cdc-b369-407f-9e42-215aa70a6ecf to disappear
May 20 00:52:24.667: INFO: Pod pod-secrets-3ae40cdc-b369-407f-9e42-215aa70a6ecf no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:52:24.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4204" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":219,"skipped":3702,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:52:24.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May 20 00:52:29.882: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:52:29.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8527" for this suite.
• [SLOW TEST:5.346 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":220,"skipped":3726,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:52:30.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 20 00:52:30.162: INFO: Waiting up to 5m0s for pod "pod-c7ee2d55-b664-428d-ada3-8f53b1eb9f25" in namespace "emptydir-2383" to be "Succeeded or Failed"
May 20 00:52:30.241: INFO: Pod "pod-c7ee2d55-b664-428d-ada3-8f53b1eb9f25": Phase="Pending", Reason="", readiness=false. Elapsed: 79.528465ms
May 20 00:52:32.246: INFO: Pod "pod-c7ee2d55-b664-428d-ada3-8f53b1eb9f25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083741661s
May 20 00:52:34.255: INFO: Pod "pod-c7ee2d55-b664-428d-ada3-8f53b1eb9f25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093054023s
STEP: Saw pod success
May 20 00:52:34.255: INFO: Pod "pod-c7ee2d55-b664-428d-ada3-8f53b1eb9f25" satisfied condition "Succeeded or Failed"
May 20 00:52:34.258: INFO: Trying to get logs from node latest-worker pod pod-c7ee2d55-b664-428d-ada3-8f53b1eb9f25 container test-container:
STEP: delete the pod
May 20 00:52:34.315: INFO: Waiting for pod pod-c7ee2d55-b664-428d-ada3-8f53b1eb9f25 to disappear
May 20 00:52:34.326: INFO: Pod pod-c7ee2d55-b664-428d-ada3-8f53b1eb9f25 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:52:34.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2383" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":221,"skipped":3733,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:52:34.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-ab0e9889-8a7e-4f15-b4a3-5636345200fe
STEP: Creating a pod to test consume
secrets
May 20 00:52:34.700: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-71658991-bf81-478d-b0c9-ba7ed7e5958e" in namespace "projected-167" to be "Succeeded or Failed"
May 20 00:52:34.704: INFO: Pod "pod-projected-secrets-71658991-bf81-478d-b0c9-ba7ed7e5958e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.424669ms
May 20 00:52:36.707: INFO: Pod "pod-projected-secrets-71658991-bf81-478d-b0c9-ba7ed7e5958e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006663259s
May 20 00:52:38.710: INFO: Pod "pod-projected-secrets-71658991-bf81-478d-b0c9-ba7ed7e5958e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009733621s
STEP: Saw pod success
May 20 00:52:38.710: INFO: Pod "pod-projected-secrets-71658991-bf81-478d-b0c9-ba7ed7e5958e" satisfied condition "Succeeded or Failed"
May 20 00:52:38.722: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-71658991-bf81-478d-b0c9-ba7ed7e5958e container projected-secret-volume-test:
STEP: delete the pod
May 20 00:52:38.770: INFO: Waiting for pod pod-projected-secrets-71658991-bf81-478d-b0c9-ba7ed7e5958e to disappear
May 20 00:52:38.776: INFO: Pod pod-projected-secrets-71658991-bf81-478d-b0c9-ba7ed7e5958e no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:52:38.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-167" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":222,"skipped":3740,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:52:38.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
May 20 00:52:43.083: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-6616 PodName:var-expansion-91d1c717-0a51-4b44-8ca9-0fbf31c34e5d ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 20 00:52:43.083: INFO: >>> kubeConfig: /root/.kube/config
I0520 00:52:43.120247 7 log.go:172] (0xc002fe8790) (0xc001ca1040) Create stream
I0520 00:52:43.120289 7 log.go:172] (0xc002fe8790) (0xc001ca1040) Stream added, broadcasting: 1
I0520 00:52:43.122350 7 log.go:172] (0xc002fe8790) Reply frame received for 1
I0520 00:52:43.122395 7 log.go:172] (0xc002fe8790) (0xc001ca1180) Create stream
I0520 00:52:43.122406 7 log.go:172] (0xc002fe8790) (0xc001ca1180) Stream added, broadcasting: 3
I0520 00:52:43.123385 7 log.go:172] (0xc002fe8790) Reply frame received
for 3
I0520 00:52:43.123441 7 log.go:172] (0xc002fe8790) (0xc001ca1220) Create stream
I0520 00:52:43.123464 7 log.go:172] (0xc002fe8790) (0xc001ca1220) Stream added, broadcasting: 5
I0520 00:52:43.124368 7 log.go:172] (0xc002fe8790) Reply frame received for 5
I0520 00:52:43.197974 7 log.go:172] (0xc002fe8790) Data frame received for 3
I0520 00:52:43.198017 7 log.go:172] (0xc001ca1180) (3) Data frame handling
I0520 00:52:43.198225 7 log.go:172] (0xc002fe8790) Data frame received for 5
I0520 00:52:43.198250 7 log.go:172] (0xc001ca1220) (5) Data frame handling
I0520 00:52:43.199677 7 log.go:172] (0xc002fe8790) Data frame received for 1
I0520 00:52:43.199729 7 log.go:172] (0xc001ca1040) (1) Data frame handling
I0520 00:52:43.199748 7 log.go:172] (0xc001ca1040) (1) Data frame sent
I0520 00:52:43.199760 7 log.go:172] (0xc002fe8790) (0xc001ca1040) Stream removed, broadcasting: 1
I0520 00:52:43.199776 7 log.go:172] (0xc002fe8790) Go away received
I0520 00:52:43.199949 7 log.go:172] (0xc002fe8790) (0xc001ca1040) Stream removed, broadcasting: 1
I0520 00:52:43.199968 7 log.go:172] (0xc002fe8790) (0xc001ca1180) Stream removed, broadcasting: 3
I0520 00:52:43.199978 7 log.go:172] (0xc002fe8790) (0xc001ca1220) Stream removed, broadcasting: 5
STEP: test for file in mounted path
May 20 00:52:43.203: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-6616 PodName:var-expansion-91d1c717-0a51-4b44-8ca9-0fbf31c34e5d ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 20 00:52:43.203: INFO: >>> kubeConfig: /root/.kube/config
I0520 00:52:43.224648 7 log.go:172] (0xc002fe8e70) (0xc001ca1f40) Create stream
I0520 00:52:43.224680 7 log.go:172] (0xc002fe8e70) (0xc001ca1f40) Stream added, broadcasting: 1
I0520 00:52:43.226604 7 log.go:172] (0xc002fe8e70) Reply frame received for 1
I0520 00:52:43.226642 7 log.go:172] (0xc002fe8e70) (0xc002a360a0) Create stream
I0520 00:52:43.226652 7 log.go:172] (0xc002fe8e70) (0xc002a360a0) Stream added, broadcasting: 3
I0520 00:52:43.227440 7 log.go:172] (0xc002fe8e70) Reply frame received for 3
I0520 00:52:43.227465 7 log.go:172] (0xc002fe8e70) (0xc0007986e0) Create stream
I0520 00:52:43.227478 7 log.go:172] (0xc002fe8e70) (0xc0007986e0) Stream added, broadcasting: 5
I0520 00:52:43.228238 7 log.go:172] (0xc002fe8e70) Reply frame received for 5
I0520 00:52:43.303257 7 log.go:172] (0xc002fe8e70) Data frame received for 5
I0520 00:52:43.303336 7 log.go:172] (0xc0007986e0) (5) Data frame handling
I0520 00:52:43.303367 7 log.go:172] (0xc002fe8e70) Data frame received for 3
I0520 00:52:43.303380 7 log.go:172] (0xc002a360a0) (3) Data frame handling
I0520 00:52:43.304488 7 log.go:172] (0xc002fe8e70) Data frame received for 1
I0520 00:52:43.304536 7 log.go:172] (0xc001ca1f40) (1) Data frame handling
I0520 00:52:43.304564 7 log.go:172] (0xc001ca1f40) (1) Data frame sent
I0520 00:52:43.304593 7 log.go:172] (0xc002fe8e70) (0xc001ca1f40) Stream removed, broadcasting: 1
I0520 00:52:43.304613 7 log.go:172] (0xc002fe8e70) Go away received
I0520 00:52:43.304697 7 log.go:172] (0xc002fe8e70) (0xc001ca1f40) Stream removed, broadcasting: 1
I0520 00:52:43.304723 7 log.go:172] (0xc002fe8e70) (0xc002a360a0) Stream removed, broadcasting: 3
I0520 00:52:43.304742 7 log.go:172] (0xc002fe8e70) (0xc0007986e0) Stream removed, broadcasting: 5
STEP: updating the annotation value
May 20 00:52:43.814: INFO: Successfully updated pod "var-expansion-91d1c717-0a51-4b44-8ca9-0fbf31c34e5d"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
May 20 00:52:43.824: INFO: Deleting pod "var-expansion-91d1c717-0a51-4b44-8ca9-0fbf31c34e5d" in namespace "var-expansion-6616"
May 20 00:52:43.827: INFO: Wait up to 5m0s for pod "var-expansion-91d1c717-0a51-4b44-8ca9-0fbf31c34e5d" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20
00:53:25.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6616" for this suite.
• [SLOW TEST:47.070 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":223,"skipped":3779,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:53:25.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
May 20 00:53:25.941: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 00:53:34.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7810" for this suite.
• [SLOW TEST:8.596 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":224,"skipped":3787,"failed":0}
S
------------------------------
[sig-network] Proxy version v1
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 00:53:34.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-szc7k in namespace proxy-5370
I0520 00:53:34.580937 7 runners.go:190] Created replication controller with name: proxy-service-szc7k, namespace: proxy-5370, replica count: 1
I0520 00:53:35.631532 7 runners.go:190] proxy-service-szc7k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0520 00:53:36.631769 7 runners.go:190] proxy-service-szc7k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0520 00:53:37.632001 7 runners.go:190] proxy-service-szc7k Pods: 1
out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0520 00:53:38.632229 7 runners.go:190] proxy-service-szc7k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0520 00:53:39.632486 7 runners.go:190] proxy-service-szc7k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0520 00:53:40.632705 7 runners.go:190] proxy-service-szc7k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0520 00:53:41.632924 7 runners.go:190] proxy-service-szc7k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0520 00:53:42.633090 7 runners.go:190] proxy-service-szc7k Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 20 00:53:42.637: INFO: setup took 8.104626822s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
May 20 00:53:42.647: INFO: (0) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 9.685689ms)
May 20 00:53:42.647: INFO: (0) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 10.05976ms)
May 20 00:53:42.647: INFO: (0) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 10.138943ms)
May 20 00:53:42.648: INFO: (0) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 11.114694ms)
May 20 00:53:42.650: INFO: (0) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 12.271249ms)
May 20 00:53:42.650: INFO: (0) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 12.666534ms)
May 20 00:53:42.650: INFO: (0)
/api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 12.749932ms) May 20 00:53:42.654: INFO: (0) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:1080/proxy/: ... (200; 16.906772ms) May 20 00:53:42.654: INFO: (0) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... (200; 16.747944ms) May 20 00:53:42.654: INFO: (0) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 16.851284ms) May 20 00:53:42.654: INFO: (0) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9/proxy/: test (200; 17.134996ms) May 20 00:53:42.656: INFO: (0) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 18.846592ms) May 20 00:53:42.656: INFO: (0) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 18.909971ms) May 20 00:53:42.659: INFO: (0) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 21.923886ms) May 20 00:53:42.659: INFO: (0) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 21.839024ms) May 20 00:53:42.660: INFO: (0) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: test (200; 4.995835ms) May 20 00:53:42.665: INFO: (1) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 4.924858ms) May 20 00:53:42.666: INFO: (1) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 5.215973ms) May 20 00:53:42.666: INFO: (1) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 5.257049ms) May 20 00:53:42.667: INFO: (1) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 6.332267ms) May 20 00:53:42.667: INFO: (1) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 6.552501ms) May 20 00:53:42.667: INFO: (1) 
/api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 6.752918ms) May 20 00:53:42.667: INFO: (1) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 6.754859ms) May 20 00:53:42.667: INFO: (1) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 6.86419ms) May 20 00:53:42.667: INFO: (1) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 6.886657ms) May 20 00:53:42.668: INFO: (1) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 7.484819ms) May 20 00:53:42.668: INFO: (1) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 7.63204ms) May 20 00:53:42.668: INFO: (1) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:1080/proxy/: ... (200; 7.664772ms) May 20 00:53:42.668: INFO: (1) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... (200; 7.644389ms) May 20 00:53:42.668: INFO: (1) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: ... 
(200; 4.961353ms) May 20 00:53:42.674: INFO: (2) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 6.168841ms) May 20 00:53:42.675: INFO: (2) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 6.276847ms) May 20 00:53:42.675: INFO: (2) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 6.301788ms) May 20 00:53:42.675: INFO: (2) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 6.632838ms) May 20 00:53:42.675: INFO: (2) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9/proxy/: test (200; 7.016099ms) May 20 00:53:42.675: INFO: (2) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 7.039877ms) May 20 00:53:42.675: INFO: (2) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 7.087659ms) May 20 00:53:42.675: INFO: (2) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 6.999592ms) May 20 00:53:42.675: INFO: (2) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 7.118753ms) May 20 00:53:42.675: INFO: (2) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 7.194811ms) May 20 00:53:42.675: INFO: (2) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 7.106755ms) May 20 00:53:42.675: INFO: (2) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 7.173035ms) May 20 00:53:42.676: INFO: (2) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... (200; 7.669328ms) May 20 00:53:42.676: INFO: (2) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: ... 
(200; 5.084672ms) May 20 00:53:42.681: INFO: (3) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 5.049139ms) May 20 00:53:42.682: INFO: (3) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9/proxy/: test (200; 5.660437ms) May 20 00:53:42.682: INFO: (3) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 5.589839ms) May 20 00:53:42.682: INFO: (3) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... (200; 6.125358ms) May 20 00:53:42.682: INFO: (3) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 6.167848ms) May 20 00:53:42.682: INFO: (3) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 6.223742ms) May 20 00:53:42.682: INFO: (3) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 6.224463ms) May 20 00:53:42.682: INFO: (3) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 6.239813ms) May 20 00:53:42.682: INFO: (3) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 6.250049ms) May 20 00:53:42.683: INFO: (3) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 6.341082ms) May 20 00:53:42.683: INFO: (3) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: test<... 
(200; 2.879238ms) May 20 00:53:42.686: INFO: (4) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 3.094381ms) May 20 00:53:42.689: INFO: (4) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 5.996397ms) May 20 00:53:42.689: INFO: (4) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 6.442347ms) May 20 00:53:42.689: INFO: (4) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9/proxy/: test (200; 6.428478ms) May 20 00:53:42.690: INFO: (4) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: ... (200; 6.62874ms) May 20 00:53:42.690: INFO: (4) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 6.724353ms) May 20 00:53:42.690: INFO: (4) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 6.77641ms) May 20 00:53:42.690: INFO: (4) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 6.825413ms) May 20 00:53:42.690: INFO: (4) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 6.80145ms) May 20 00:53:42.690: INFO: (4) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 6.875766ms) May 20 00:53:42.691: INFO: (4) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 8.129704ms) May 20 00:53:42.691: INFO: (4) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 8.149725ms) May 20 00:53:42.696: INFO: (5) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... 
(200; 4.747012ms) May 20 00:53:42.696: INFO: (5) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 4.689964ms) May 20 00:53:42.696: INFO: (5) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 4.691111ms) May 20 00:53:42.696: INFO: (5) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 4.803008ms) May 20 00:53:42.696: INFO: (5) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 5.051036ms) May 20 00:53:42.696: INFO: (5) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: test (200; 6.001883ms) May 20 00:53:42.697: INFO: (5) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 6.086834ms) May 20 00:53:42.697: INFO: (5) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 6.042789ms) May 20 00:53:42.698: INFO: (5) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 6.16626ms) May 20 00:53:42.698: INFO: (5) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 6.381162ms) May 20 00:53:42.698: INFO: (5) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 6.37335ms) May 20 00:53:42.698: INFO: (5) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:1080/proxy/: ... 
(200; 6.351661ms) May 20 00:53:42.704: INFO: (6) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 6.12988ms) May 20 00:53:42.704: INFO: (6) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 6.003085ms) May 20 00:53:42.704: INFO: (6) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 6.486498ms) May 20 00:53:42.704: INFO: (6) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 6.678904ms) May 20 00:53:42.705: INFO: (6) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... (200; 6.556027ms) May 20 00:53:42.705: INFO: (6) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 6.715356ms) May 20 00:53:42.705: INFO: (6) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: ... (200; 6.995258ms) May 20 00:53:42.705: INFO: (6) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9/proxy/: test (200; 7.086327ms) May 20 00:53:42.705: INFO: (6) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 7.200967ms) May 20 00:53:42.705: INFO: (6) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 7.212703ms) May 20 00:53:42.708: INFO: (7) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 2.662195ms) May 20 00:53:42.708: INFO: (7) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 2.680169ms) May 20 00:53:42.708: INFO: (7) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 3.261704ms) May 20 00:53:42.708: INFO: (7) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9/proxy/: test (200; 3.366159ms) May 20 00:53:42.710: INFO: (7) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 4.435654ms) May 20 
00:53:42.710: INFO: (7) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 4.841102ms) May 20 00:53:42.710: INFO: (7) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 4.850982ms) May 20 00:53:42.710: INFO: (7) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 4.959238ms) May 20 00:53:42.710: INFO: (7) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 4.954087ms) May 20 00:53:42.710: INFO: (7) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 4.862797ms) May 20 00:53:42.710: INFO: (7) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 5.239384ms) May 20 00:53:42.710: INFO: (7) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 5.415365ms) May 20 00:53:42.710: INFO: (7) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 5.350194ms) May 20 00:53:42.711: INFO: (7) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:1080/proxy/: ... (200; 5.307442ms) May 20 00:53:42.711: INFO: (7) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: test<... (200; 5.473589ms) May 20 00:53:42.713: INFO: (8) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 2.749551ms) May 20 00:53:42.714: INFO: (8) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: ... 
(200; 4.427414ms) May 20 00:53:42.715: INFO: (8) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 4.51169ms) May 20 00:53:42.715: INFO: (8) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 4.456531ms) May 20 00:53:42.715: INFO: (8) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 4.506943ms) May 20 00:53:42.716: INFO: (8) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9/proxy/: test (200; 4.86663ms) May 20 00:53:42.716: INFO: (8) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... (200; 4.868794ms) May 20 00:53:42.716: INFO: (8) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 4.933281ms) May 20 00:53:42.717: INFO: (8) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 6.308498ms) May 20 00:53:42.717: INFO: (8) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 6.606632ms) May 20 00:53:42.717: INFO: (8) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 6.715281ms) May 20 00:53:42.718: INFO: (8) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 6.690516ms) May 20 00:53:42.718: INFO: (8) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 6.686114ms) May 20 00:53:42.718: INFO: (8) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 7.351347ms) May 20 00:53:42.721: INFO: (9) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 2.748074ms) May 20 00:53:42.721: INFO: (9) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: test (200; 3.58386ms) May 20 00:53:42.722: INFO: (9) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 
3.685797ms) May 20 00:53:42.722: INFO: (9) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:1080/proxy/: ... (200; 3.713905ms) May 20 00:53:42.722: INFO: (9) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 4.041191ms) May 20 00:53:42.722: INFO: (9) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... (200; 4.08707ms) May 20 00:53:42.722: INFO: (9) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 4.06443ms) May 20 00:53:42.722: INFO: (9) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 4.086411ms) May 20 00:53:42.722: INFO: (9) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 4.084795ms) May 20 00:53:42.723: INFO: (9) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 5.006909ms) May 20 00:53:42.723: INFO: (9) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 5.165765ms) May 20 00:53:42.723: INFO: (9) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 5.245947ms) May 20 00:53:42.723: INFO: (9) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 5.253787ms) May 20 00:53:42.723: INFO: (9) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 5.371718ms) May 20 00:53:42.724: INFO: (9) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 5.550781ms) May 20 00:53:42.728: INFO: (10) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9/proxy/: test (200; 3.768159ms) May 20 00:53:42.728: INFO: (10) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 4.310577ms) May 20 00:53:42.728: INFO: (10) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 4.229485ms) May 20 
00:53:42.728: INFO: (10) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 4.362142ms) May 20 00:53:42.728: INFO: (10) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 4.516624ms) May 20 00:53:42.729: INFO: (10) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 5.210482ms) May 20 00:53:42.729: INFO: (10) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:1080/proxy/: ... (200; 5.239162ms) May 20 00:53:42.729: INFO: (10) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 5.288084ms) May 20 00:53:42.729: INFO: (10) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: test<... (200; 5.212599ms) May 20 00:53:42.729: INFO: (10) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 5.336946ms) May 20 00:53:42.730: INFO: (10) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 5.723985ms) May 20 00:53:42.730: INFO: (10) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 5.907825ms) May 20 00:53:42.737: INFO: (11) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 7.412763ms) May 20 00:53:42.737: INFO: (11) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9/proxy/: test (200; 7.318552ms) May 20 00:53:42.737: INFO: (11) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... (200; 7.33798ms) May 20 00:53:42.737: INFO: (11) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 7.374151ms) May 20 00:53:42.737: INFO: (11) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:1080/proxy/: ... 
(200; 7.411292ms) May 20 00:53:42.737: INFO: (11) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 7.47869ms) May 20 00:53:42.737: INFO: (11) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 7.432003ms) May 20 00:53:42.737: INFO: (11) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 7.403599ms) May 20 00:53:42.737: INFO: (11) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 7.483901ms) May 20 00:53:42.737: INFO: (11) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: ... (200; 4.172457ms) May 20 00:53:42.742: INFO: (12) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 4.205285ms) May 20 00:53:42.742: INFO: (12) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 4.265391ms) May 20 00:53:42.742: INFO: (12) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 4.235531ms) May 20 00:53:42.742: INFO: (12) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... 
(200; 4.200072ms) May 20 00:53:42.742: INFO: (12) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: test (200; 5.278507ms) May 20 00:53:42.743: INFO: (12) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 5.329435ms) May 20 00:53:42.743: INFO: (12) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 5.386081ms) May 20 00:53:42.743: INFO: (12) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 5.421027ms) May 20 00:53:42.743: INFO: (12) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 5.474663ms) May 20 00:53:42.747: INFO: (13) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 3.596565ms) May 20 00:53:42.748: INFO: (13) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 4.707498ms) May 20 00:53:42.748: INFO: (13) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 4.723083ms) May 20 00:53:42.748: INFO: (13) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 4.709102ms) May 20 00:53:42.748: INFO: (13) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... 
(200; 4.72681ms) May 20 00:53:42.748: INFO: (13) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 4.959282ms) May 20 00:53:42.748: INFO: (13) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 4.899555ms) May 20 00:53:42.748: INFO: (13) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 4.911857ms) May 20 00:53:42.748: INFO: (13) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 4.933384ms) May 20 00:53:42.748: INFO: (13) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9/proxy/: test (200; 5.20488ms) May 20 00:53:42.749: INFO: (13) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 5.801219ms) May 20 00:53:42.749: INFO: (13) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 5.850809ms) May 20 00:53:42.749: INFO: (13) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 5.916856ms) May 20 00:53:42.749: INFO: (13) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: ... 
(200; 5.918359ms) May 20 00:53:42.749: INFO: (13) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 6.005148ms) May 20 00:53:42.752: INFO: (14) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 2.496279ms) May 20 00:53:42.752: INFO: (14) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 2.74995ms) May 20 00:53:42.752: INFO: (14) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 2.929191ms) May 20 00:53:42.752: INFO: (14) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 3.025745ms) May 20 00:53:42.754: INFO: (14) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 5.2028ms) May 20 00:53:42.754: INFO: (14) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 5.172408ms) May 20 00:53:42.755: INFO: (14) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 5.277012ms) May 20 00:53:42.755: INFO: (14) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: test<... (200; 6.138844ms) May 20 00:53:42.756: INFO: (14) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9/proxy/: test (200; 6.147826ms) May 20 00:53:42.756: INFO: (14) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 6.407297ms) May 20 00:53:42.756: INFO: (14) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 6.477084ms) May 20 00:53:42.756: INFO: (14) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:1080/proxy/: ... 
(200; 6.508719ms) May 20 00:53:42.758: INFO: (15) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 2.581337ms) May 20 00:53:42.758: INFO: (15) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 2.618772ms) May 20 00:53:42.758: INFO: (15) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:1080/proxy/: ... (200; 2.745648ms) May 20 00:53:42.759: INFO: (15) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 2.667013ms) May 20 00:53:42.759: INFO: (15) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9/proxy/: test (200; 3.227311ms) May 20 00:53:42.759: INFO: (15) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... (200; 3.499463ms) May 20 00:53:42.760: INFO: (15) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: ... (200; 3.130157ms) May 20 00:53:42.764: INFO: (16) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... 
(200; 3.310608ms) May 20 00:53:42.764: INFO: (16) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 3.364655ms) May 20 00:53:42.764: INFO: (16) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: test (200; 3.642857ms) May 20 00:53:42.764: INFO: (16) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 3.70453ms) May 20 00:53:42.765: INFO: (16) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 3.781817ms) May 20 00:53:42.765: INFO: (16) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 3.854225ms) May 20 00:53:42.765: INFO: (16) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 4.582678ms) May 20 00:53:42.766: INFO: (16) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 4.865608ms) May 20 00:53:42.766: INFO: (16) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 4.902371ms) May 20 00:53:42.766: INFO: (16) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 4.97015ms) May 20 00:53:42.766: INFO: (16) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 4.937308ms) May 20 00:53:42.766: INFO: (16) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 5.0193ms) May 20 00:53:42.766: INFO: (16) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 4.992653ms) May 20 00:53:42.771: INFO: (17) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 5.235347ms) May 20 00:53:42.771: INFO: (17) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 5.3641ms) May 20 00:53:42.771: INFO: (17) 
/api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 5.440553ms) May 20 00:53:42.771: INFO: (17) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 5.482713ms) May 20 00:53:42.771: INFO: (17) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 5.438287ms) May 20 00:53:42.771: INFO: (17) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 5.491422ms) May 20 00:53:42.772: INFO: (17) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 5.808686ms) May 20 00:53:42.772: INFO: (17) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 5.917471ms) May 20 00:53:42.772: INFO: (17) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 5.962646ms) May 20 00:53:42.772: INFO: (17) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: test (200; 6.291644ms) May 20 00:53:42.772: INFO: (17) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 6.392861ms) May 20 00:53:42.772: INFO: (17) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:1080/proxy/: ... (200; 6.450384ms) May 20 00:53:42.772: INFO: (17) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... 
(200; 6.573191ms) May 20 00:53:42.773: INFO: (17) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 6.95933ms) May 20 00:53:42.777: INFO: (18) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 4.454301ms) May 20 00:53:42.777: INFO: (18) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:462/proxy/: tls qux (200; 4.390545ms) May 20 00:53:42.778: INFO: (18) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 5.507448ms) May 20 00:53:42.778: INFO: (18) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:1080/proxy/: ... (200; 5.523477ms) May 20 00:53:42.779: INFO: (18) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 5.95342ms) May 20 00:53:42.779: INFO: (18) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 5.923262ms) May 20 00:53:42.779: INFO: (18) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... 
(200; 5.947177ms) May 20 00:53:42.779: INFO: (18) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname1/proxy/: foo (200; 6.038181ms) May 20 00:53:42.779: INFO: (18) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: test (200; 6.164455ms) May 20 00:53:42.779: INFO: (18) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 6.251519ms) May 20 00:53:42.779: INFO: (18) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 6.211852ms) May 20 00:53:42.779: INFO: (18) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 6.210514ms) May 20 00:53:42.779: INFO: (18) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 6.18832ms) May 20 00:53:42.780: INFO: (18) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname1/proxy/: tls baz (200; 6.639051ms) May 20 00:53:42.786: INFO: (19) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:1080/proxy/: test<... (200; 6.450422ms) May 20 00:53:42.786: INFO: (19) /api/v1/namespaces/proxy-5370/pods/http:proxy-service-szc7k-8cxh9:160/proxy/: foo (200; 6.509933ms) May 20 00:53:42.786: INFO: (19) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9/proxy/: test (200; 6.591328ms) May 20 00:53:42.787: INFO: (19) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname1/proxy/: foo (200; 6.922647ms) May 20 00:53:42.787: INFO: (19) /api/v1/namespaces/proxy-5370/services/http:proxy-service-szc7k:portname2/proxy/: bar (200; 6.881915ms) May 20 00:53:42.787: INFO: (19) /api/v1/namespaces/proxy-5370/services/https:proxy-service-szc7k:tlsportname2/proxy/: tls qux (200; 6.988802ms) May 20 00:53:42.787: INFO: (19) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:443/proxy/: ... 
(200; 7.120979ms) May 20 00:53:42.787: INFO: (19) /api/v1/namespaces/proxy-5370/pods/https:proxy-service-szc7k-8cxh9:460/proxy/: tls baz (200; 7.105946ms) May 20 00:53:42.787: INFO: (19) /api/v1/namespaces/proxy-5370/services/proxy-service-szc7k:portname2/proxy/: bar (200; 7.199179ms) May 20 00:53:42.787: INFO: (19) /api/v1/namespaces/proxy-5370/pods/proxy-service-szc7k-8cxh9:162/proxy/: bar (200; 7.452321ms) STEP: deleting ReplicationController proxy-service-szc7k in namespace proxy-5370, will wait for the garbage collector to delete the pods May 20 00:53:42.846: INFO: Deleting ReplicationController proxy-service-szc7k took: 7.036738ms May 20 00:53:43.147: INFO: Terminating ReplicationController proxy-service-szc7k pods took: 300.259231ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:53:45.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5370" for this suite. 
• [SLOW TEST:11.019 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":225,"skipped":3788,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:53:45.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-1b9ca828-bd04-4e30-86d2-0ab653b8e22c STEP: Creating a pod to test consume secrets May 20 00:53:45.559: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d1672e7e-b96d-4922-93bc-ec93d7570e46" in namespace "projected-1432" to be "Succeeded or Failed" May 20 00:53:45.602: INFO: Pod "pod-projected-secrets-d1672e7e-b96d-4922-93bc-ec93d7570e46": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.767129ms May 20 00:53:47.605: INFO: Pod "pod-projected-secrets-d1672e7e-b96d-4922-93bc-ec93d7570e46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046586529s May 20 00:53:49.614: INFO: Pod "pod-projected-secrets-d1672e7e-b96d-4922-93bc-ec93d7570e46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055191115s STEP: Saw pod success May 20 00:53:49.614: INFO: Pod "pod-projected-secrets-d1672e7e-b96d-4922-93bc-ec93d7570e46" satisfied condition "Succeeded or Failed" May 20 00:53:49.618: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-d1672e7e-b96d-4922-93bc-ec93d7570e46 container projected-secret-volume-test: STEP: delete the pod May 20 00:53:49.698: INFO: Waiting for pod pod-projected-secrets-d1672e7e-b96d-4922-93bc-ec93d7570e46 to disappear May 20 00:53:49.706: INFO: Pod pod-projected-secrets-d1672e7e-b96d-4922-93bc-ec93d7570e46 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:53:49.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1432" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":226,"skipped":3810,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:53:49.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-147.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-147.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-147.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-147.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 20 00:53:55.914: INFO: DNS probes using dns-test-7662575e-d83c-4b46-a8cd-93cf91d4ca49 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-147.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-147.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for 
i in `seq 1 30`; do dig +short dns-test-service-3.dns-147.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-147.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 20 00:54:02.105: INFO: DNS probes using dns-test-245f74aa-5784-4412-afb6-0beb0997253d succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-147.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-147.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-147.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-147.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 20 00:54:10.493: INFO: DNS probes using dns-test-989cb0d4-c8e5-4a6e-a2a9-5d51d902cf5f succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:54:10.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-147" for this suite. 
• [SLOW TEST:20.905 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":227,"skipped":3830,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:54:10.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 20 00:54:11.084: INFO: >>> kubeConfig: /root/.kube/config May 20 00:54:14.069: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:54:24.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2385" for this suite. 
• [SLOW TEST:14.194 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":228,"skipped":3836,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:54:24.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:54:24.879: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:54:25.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "custom-resource-definition-4703" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":229,"skipped":3860,"failed":0} S ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:54:25.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-1800 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1800 to expose endpoints map[] May 20 00:54:25.678: INFO: Get endpoints failed (15.216687ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 20 00:54:26.681: INFO: successfully validated that service multi-endpoint-test in namespace services-1800 exposes endpoints map[] (1.018048831s elapsed) STEP: Creating pod pod1 in namespace services-1800 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1800 to expose endpoints map[pod1:[100]] May 20 00:54:29.818: INFO: successfully validated that service multi-endpoint-test in namespace services-1800 exposes endpoints map[pod1:[100]] (3.130779522s 
elapsed) STEP: Creating pod pod2 in namespace services-1800 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1800 to expose endpoints map[pod1:[100] pod2:[101]] May 20 00:54:34.099: INFO: successfully validated that service multi-endpoint-test in namespace services-1800 exposes endpoints map[pod1:[100] pod2:[101]] (4.273695499s elapsed) STEP: Deleting pod pod1 in namespace services-1800 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1800 to expose endpoints map[pod2:[101]] May 20 00:54:35.173: INFO: successfully validated that service multi-endpoint-test in namespace services-1800 exposes endpoints map[pod2:[101]] (1.068829713s elapsed) STEP: Deleting pod pod2 in namespace services-1800 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1800 to expose endpoints map[] May 20 00:54:36.187: INFO: successfully validated that service multi-endpoint-test in namespace services-1800 exposes endpoints map[] (1.009051675s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:54:36.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1800" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:10.733 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":230,"skipped":3861,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:54:36.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 20 00:54:36.624: INFO: Waiting up to 5m0s for pod "downwardapi-volume-886ecbcf-e23d-4e76-b86f-c05d7523e930" in namespace "downward-api-3034" to be "Succeeded or Failed" May 20 00:54:36.647: INFO: Pod "downwardapi-volume-886ecbcf-e23d-4e76-b86f-c05d7523e930": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.546584ms May 20 00:54:38.710: INFO: Pod "downwardapi-volume-886ecbcf-e23d-4e76-b86f-c05d7523e930": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085582709s May 20 00:54:40.714: INFO: Pod "downwardapi-volume-886ecbcf-e23d-4e76-b86f-c05d7523e930": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089490958s STEP: Saw pod success May 20 00:54:40.714: INFO: Pod "downwardapi-volume-886ecbcf-e23d-4e76-b86f-c05d7523e930" satisfied condition "Succeeded or Failed" May 20 00:54:40.717: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-886ecbcf-e23d-4e76-b86f-c05d7523e930 container client-container: STEP: delete the pod May 20 00:54:40.927: INFO: Waiting for pod downwardapi-volume-886ecbcf-e23d-4e76-b86f-c05d7523e930 to disappear May 20 00:54:40.966: INFO: Pod downwardapi-volume-886ecbcf-e23d-4e76-b86f-c05d7523e930 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:54:40.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3034" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":231,"skipped":3867,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:54:40.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8681.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8681.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 20 00:54:47.276: INFO: DNS probes using dns-8681/dns-test-918ffefd-2040-4499-9c82-aaf198d771bd succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:54:47.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8681" for this suite. 
• [SLOW TEST:6.365 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":232,"skipped":3907,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:54:47.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:54:47.422: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:54:51.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4889" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":233,"skipped":3925,"failed":0} ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:54:51.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:54:51.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7973" for this suite. STEP: Destroying namespace "nspatchtest-c27e6d5c-ddb5-4bd1-b9c3-0c91ba88de1d-7117" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":234,"skipped":3925,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:54:51.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-2eb92b4d-de96-4008-8d18-f1161cbec0cc in namespace container-probe-1258 May 20 00:54:56.010: INFO: Started pod liveness-2eb92b4d-de96-4008-8d18-f1161cbec0cc in namespace container-probe-1258 STEP: checking the pod's current state and verifying that restartCount is present May 20 00:54:56.012: INFO: Initial restart count of pod liveness-2eb92b4d-de96-4008-8d18-f1161cbec0cc is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:58:56.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1258" for this suite. 
• [SLOW TEST:244.902 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":235,"skipped":3945,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:58:56.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 20 00:58:56.870: INFO: Waiting up to 5m0s for pod "pod-f9ad7882-7f64-4e59-806e-dad6dd167333" in namespace "emptydir-3667" to be "Succeeded or Failed" May 20 00:58:57.161: INFO: Pod "pod-f9ad7882-7f64-4e59-806e-dad6dd167333": Phase="Pending", Reason="", readiness=false. Elapsed: 291.005837ms May 20 00:58:59.165: INFO: Pod "pod-f9ad7882-7f64-4e59-806e-dad6dd167333": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294703606s May 20 00:59:01.169: INFO: Pod "pod-f9ad7882-7f64-4e59-806e-dad6dd167333": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.299206298s STEP: Saw pod success May 20 00:59:01.169: INFO: Pod "pod-f9ad7882-7f64-4e59-806e-dad6dd167333" satisfied condition "Succeeded or Failed" May 20 00:59:01.172: INFO: Trying to get logs from node latest-worker2 pod pod-f9ad7882-7f64-4e59-806e-dad6dd167333 container test-container: STEP: delete the pod May 20 00:59:01.227: INFO: Waiting for pod pod-f9ad7882-7f64-4e59-806e-dad6dd167333 to disappear May 20 00:59:01.239: INFO: Pod pod-f9ad7882-7f64-4e59-806e-dad6dd167333 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:59:01.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3667" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":236,"skipped":3962,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:59:01.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API 
volume plugin May 20 00:59:01.344: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc3f6a2b-bce3-41c4-93a3-4d5db0118c75" in namespace "projected-2059" to be "Succeeded or Failed" May 20 00:59:01.360: INFO: Pod "downwardapi-volume-bc3f6a2b-bce3-41c4-93a3-4d5db0118c75": Phase="Pending", Reason="", readiness=false. Elapsed: 15.984228ms May 20 00:59:03.365: INFO: Pod "downwardapi-volume-bc3f6a2b-bce3-41c4-93a3-4d5db0118c75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02097589s May 20 00:59:05.369: INFO: Pod "downwardapi-volume-bc3f6a2b-bce3-41c4-93a3-4d5db0118c75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025520856s STEP: Saw pod success May 20 00:59:05.369: INFO: Pod "downwardapi-volume-bc3f6a2b-bce3-41c4-93a3-4d5db0118c75" satisfied condition "Succeeded or Failed" May 20 00:59:05.372: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-bc3f6a2b-bce3-41c4-93a3-4d5db0118c75 container client-container: STEP: delete the pod May 20 00:59:05.439: INFO: Waiting for pod downwardapi-volume-bc3f6a2b-bce3-41c4-93a3-4d5db0118c75 to disappear May 20 00:59:05.446: INFO: Pod downwardapi-volume-bc3f6a2b-bce3-41c4-93a3-4d5db0118c75 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:59:05.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2059" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":237,"skipped":4001,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:59:05.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:59:05.530: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 20 00:59:10.533: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 20 00:59:10.533: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 20 00:59:10.608: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5694 /apis/apps/v1/namespaces/deployment-5694/deployments/test-cleanup-deployment b4f797eb-b4f1-4e90-b617-eba6a1ff1679 6100425 1 2020-05-20 00:59:10 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-20 00:59:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005f96858 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 20 00:59:10.684: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-5694 /apis/apps/v1/namespaces/deployment-5694/replicasets/test-cleanup-deployment-6688745694 213793b7-996f-43eb-9688-583431b59e22 6100435 1 2020-05-20 00:59:10 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment b4f797eb-b4f1-4e90-b617-eba6a1ff1679 0xc005f96d27 0xc005f96d28}] [] [{kube-controller-manager Update apps/v1 2020-05-20 00:59:10 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b4f797eb-b4f1-4e90-b617-eba6a1ff1679\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005f96db8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 20 00:59:10.684: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 20 00:59:10.684: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5694 /apis/apps/v1/namespaces/deployment-5694/replicasets/test-cleanup-controller bbc27514-a290-4fa3-9d50-2e86479a9ea4 6100428 1 2020-05-20 00:59:05 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment b4f797eb-b4f1-4e90-b617-eba6a1ff1679 0xc005f96c0f 0xc005f96c20}] [] [{e2e.test Update apps/v1 2020-05-20 00:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-20 00:59:10 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"b4f797eb-b4f1-4e90-b617-eba6a1ff1679\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] 
[] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005f96cb8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 20 00:59:10.716: INFO: Pod "test-cleanup-controller-hbw64" is available: &Pod{ObjectMeta:{test-cleanup-controller-hbw64 test-cleanup-controller- deployment-5694 /api/v1/namespaces/deployment-5694/pods/test-cleanup-controller-hbw64 d0cf3643-7ced-4ea7-906d-78a5d18b8447 6100410 0 2020-05-20 00:59:05 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller bbc27514-a290-4fa3-9d50-2e86479a9ea4 0xc005e67557 0xc005e67558}] [] [{kube-controller-manager Update v1 2020-05-20 00:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbc27514-a290-4fa3-9d50-2e86479a9ea4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 00:59:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.240\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z94ds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z94ds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z94ds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServi
ceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:59:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:59:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:59:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:59:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.240,StartTime:2020-05-20 00:59:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 00:59:08 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0cfc06caf9fb4f69d716ee6cd27d34a2c9a1ddcdd90aa13714f17325e4cf3fa6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.240,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:59:10.717: INFO: Pod "test-cleanup-deployment-6688745694-62k2n" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-62k2n test-cleanup-deployment-6688745694- deployment-5694 /api/v1/namespaces/deployment-5694/pods/test-cleanup-deployment-6688745694-62k2n 28090196-d5d8-4b60-8abc-9f4fa3be2526 6100434 0 2020-05-20 00:59:10 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 213793b7-996f-43eb-9688-583431b59e22 0xc005e67717 0xc005e67718}] [] [{kube-controller-manager Update v1 2020-05-20 00:59:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"213793b7-996f-43eb-9688-583431b59e22\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z94ds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z94ds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z94ds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil
,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 00:59:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:59:10.717: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5694" for this suite. • [SLOW TEST:5.298 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":238,"skipped":4011,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:59:10.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 00:59:10.888: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-f1662ea5-359f-4255-83ae-356123f0f31c" in namespace "security-context-test-3965" to be "Succeeded or Failed" May 20 00:59:10.893: INFO: Pod "alpine-nnp-false-f1662ea5-359f-4255-83ae-356123f0f31c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.414606ms May 20 00:59:12.927: INFO: Pod "alpine-nnp-false-f1662ea5-359f-4255-83ae-356123f0f31c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03916206s May 20 00:59:14.949: INFO: Pod "alpine-nnp-false-f1662ea5-359f-4255-83ae-356123f0f31c": Phase="Running", Reason="", readiness=true. Elapsed: 4.061371324s May 20 00:59:16.954: INFO: Pod "alpine-nnp-false-f1662ea5-359f-4255-83ae-356123f0f31c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065677974s May 20 00:59:16.954: INFO: Pod "alpine-nnp-false-f1662ea5-359f-4255-83ae-356123f0f31c" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:59:16.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3965" for this suite. • [SLOW TEST:6.252 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":239,"skipped":4032,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:59:17.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 00:59:17.567: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 00:59:19.578: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533157, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533157, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533157, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533157, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 00:59:21.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533157, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533157, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533157, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533157, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 00:59:24.623: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 
00:59:24.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9213" for this suite. STEP: Destroying namespace "webhook-9213-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.761 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":240,"skipped":4055,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:59:24.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 20 00:59:24.900: INFO: Waiting up to 5m0s for pod "var-expansion-fb16da07-f164-48e4-99f9-21087d58fd3d" in namespace "var-expansion-4217" to be "Succeeded or Failed" May 20 00:59:24.911: 
INFO: Pod "var-expansion-fb16da07-f164-48e4-99f9-21087d58fd3d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.562382ms May 20 00:59:26.915: INFO: Pod "var-expansion-fb16da07-f164-48e4-99f9-21087d58fd3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014825216s May 20 00:59:28.919: INFO: Pod "var-expansion-fb16da07-f164-48e4-99f9-21087d58fd3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019032215s STEP: Saw pod success May 20 00:59:28.919: INFO: Pod "var-expansion-fb16da07-f164-48e4-99f9-21087d58fd3d" satisfied condition "Succeeded or Failed" May 20 00:59:28.922: INFO: Trying to get logs from node latest-worker2 pod var-expansion-fb16da07-f164-48e4-99f9-21087d58fd3d container dapi-container: STEP: delete the pod May 20 00:59:28.984: INFO: Waiting for pod var-expansion-fb16da07-f164-48e4-99f9-21087d58fd3d to disappear May 20 00:59:28.992: INFO: Pod var-expansion-fb16da07-f164-48e4-99f9-21087d58fd3d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:59:28.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4217" for this suite. 
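The Variable Expansion test above exercises Kubernetes' `$(VAR)` substitution in container env values: a later variable may reference an earlier one, `$$(VAR)` escapes to a literal `$(VAR)`, and unknown references are left verbatim. A minimal Python sketch of that expansion rule (an illustrative helper, not the kubelet's actual implementation; the variable names in the usage below are hypothetical):

```python
import re

# Matches $(NAME) and the escaped form $$(NAME).
_REF = re.compile(r"\$(\$)?\(([A-Za-z_][A-Za-z0-9_]*)\)")

def expand_env(env_list):
    """Expand $(VAR) references in env values, in declaration order.

    Approximates the Kubernetes rules: $(NAME) is replaced only if NAME
    was declared earlier; $$(NAME) becomes a literal $(NAME); unknown
    references are kept as-is.
    """
    resolved = {}
    for name, value in env_list:
        def sub(m):
            if m.group(1):  # escaped: $$(X) -> literal $(X)
                return "$(" + m.group(2) + ")"
            return resolved.get(m.group(2), m.group(0))  # unknown refs kept
        resolved[name] = _REF.sub(sub, value)
    return resolved

env = expand_env([
    ("FOO", "foo-value"),
    ("BAR", "composed-$(FOO)"),   # composes an earlier variable
    ("ESC", "$$(FOO)"),           # escaped reference
])
```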
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":241,"skipped":4056,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:59:29.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
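With `dnsPolicy: None`, the kubelet builds the pod's `/etc/resolv.conf` entirely from the pod's `dnsConfig` stanza, ignoring node and cluster DNS. A hedged Python sketch of that rendering step (illustrative only, not the kubelet's code), using the same nameserver and search values this test configures:

```python
def render_resolv_conf(dns_config):
    """Render resolv.conf content from a pod dnsConfig dict.

    For dnsPolicy=None, nameservers, search domains, and options
    come solely from dnsConfig (sketch of the kubelet's behavior).
    """
    lines = []
    for ns in dns_config.get("nameservers", []):
        lines.append(f"nameserver {ns}")
    searches = dns_config.get("searches", [])
    if searches:
        lines.append("search " + " ".join(searches))
    opts = []
    for opt in dns_config.get("options", []):
        name, val = opt["name"], opt.get("value")
        opts.append(f"{name}:{val}" if val is not None else name)
    if opts:
        lines.append("options " + " ".join(opts))
    return "\n".join(lines) + "\n"

# Values taken from the pod spec dumped below in this test run.
body = render_resolv_conf({"nameservers": ["1.1.1.1"],
                           "searches": ["resolv.conf.local"]})
```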
May 20 00:59:29.060: INFO: Created pod &Pod{ObjectMeta:{dns-4536 dns-4536 /api/v1/namespaces/dns-4536/pods/dns-4536 4d7c211b-a9fd-41b9-9a60-041c5fe9795b 6100612 0 2020-05-20 00:59:29 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-20 00:59:29 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ln98b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ln98b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ln98b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:
nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 00:59:29.064: INFO: The status of Pod dns-4536 is Pending, waiting for it to be Running (with Ready = true) May 20 00:59:31.067: INFO: The status of Pod dns-4536 is Pending, waiting for it to be Running (with Ready = true) May 20 00:59:33.067: INFO: The status of Pod dns-4536 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
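The two verification STEPs that follow exec agnhost's `dns-suffix` and `dns-server-list` commands inside the pod; conceptually both just read back the pod's `/etc/resolv.conf`. A rough Python equivalent of that readback (an assumed simplification; agnhost's real implementation differs):

```python
def parse_resolv_conf(text):
    """Extract nameservers and search suffixes from resolv.conf content."""
    nameservers, searches = [], []
    for line in text.splitlines():
        fields = line.split()
        if not fields or fields[0].startswith("#"):
            continue  # skip blanks and comments
        if fields[0] == "nameserver" and len(fields) > 1:
            nameservers.append(fields[1])
        elif fields[0] == "search":
            searches.extend(fields[1:])
    return nameservers, searches
```

The test then asserts that the parsed lists match the pod's `dnsConfig` (here, nameserver `1.1.1.1` and search suffix `resolv.conf.local`).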
May 20 00:59:33.067: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4536 PodName:dns-4536 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 00:59:33.067: INFO: >>> kubeConfig: /root/.kube/config I0520 00:59:33.101721 7 log.go:172] (0xc000f08840) (0xc000798460) Create stream I0520 00:59:33.101757 7 log.go:172] (0xc000f08840) (0xc000798460) Stream added, broadcasting: 1 I0520 00:59:33.104165 7 log.go:172] (0xc000f08840) Reply frame received for 1 I0520 00:59:33.104210 7 log.go:172] (0xc000f08840) (0xc002b13f40) Create stream I0520 00:59:33.104227 7 log.go:172] (0xc000f08840) (0xc002b13f40) Stream added, broadcasting: 3 I0520 00:59:33.105293 7 log.go:172] (0xc000f08840) Reply frame received for 3 I0520 00:59:33.105353 7 log.go:172] (0xc000f08840) (0xc0006b10e0) Create stream I0520 00:59:33.105370 7 log.go:172] (0xc000f08840) (0xc0006b10e0) Stream added, broadcasting: 5 I0520 00:59:33.106277 7 log.go:172] (0xc000f08840) Reply frame received for 5 I0520 00:59:33.217012 7 log.go:172] (0xc000f08840) Data frame received for 3 I0520 00:59:33.217053 7 log.go:172] (0xc002b13f40) (3) Data frame handling I0520 00:59:33.217079 7 log.go:172] (0xc002b13f40) (3) Data frame sent I0520 00:59:33.218148 7 log.go:172] (0xc000f08840) Data frame received for 5 I0520 00:59:33.218173 7 log.go:172] (0xc000f08840) Data frame received for 3 I0520 00:59:33.218206 7 log.go:172] (0xc002b13f40) (3) Data frame handling I0520 00:59:33.218241 7 log.go:172] (0xc0006b10e0) (5) Data frame handling I0520 00:59:33.219967 7 log.go:172] (0xc000f08840) Data frame received for 1 I0520 00:59:33.219982 7 log.go:172] (0xc000798460) (1) Data frame handling I0520 00:59:33.219997 7 log.go:172] (0xc000798460) (1) Data frame sent I0520 00:59:33.220012 7 log.go:172] (0xc000f08840) (0xc000798460) Stream removed, broadcasting: 1 I0520 00:59:33.220046 7 log.go:172] (0xc000f08840) Go away received I0520 00:59:33.220177 7 log.go:172] (0xc000f08840) 
(0xc000798460) Stream removed, broadcasting: 1 I0520 00:59:33.220208 7 log.go:172] (0xc000f08840) (0xc002b13f40) Stream removed, broadcasting: 3 I0520 00:59:33.220228 7 log.go:172] (0xc000f08840) (0xc0006b10e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 20 00:59:33.220: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4536 PodName:dns-4536 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 00:59:33.220: INFO: >>> kubeConfig: /root/.kube/config I0520 00:59:33.247055 7 log.go:172] (0xc0051089a0) (0xc000dc2500) Create stream I0520 00:59:33.247093 7 log.go:172] (0xc0051089a0) (0xc000dc2500) Stream added, broadcasting: 1 I0520 00:59:33.249431 7 log.go:172] (0xc0051089a0) Reply frame received for 1 I0520 00:59:33.249478 7 log.go:172] (0xc0051089a0) (0xc0015b4000) Create stream I0520 00:59:33.249494 7 log.go:172] (0xc0051089a0) (0xc0015b4000) Stream added, broadcasting: 3 I0520 00:59:33.250401 7 log.go:172] (0xc0051089a0) Reply frame received for 3 I0520 00:59:33.250440 7 log.go:172] (0xc0051089a0) (0xc0021a4000) Create stream I0520 00:59:33.250464 7 log.go:172] (0xc0051089a0) (0xc0021a4000) Stream added, broadcasting: 5 I0520 00:59:33.251199 7 log.go:172] (0xc0051089a0) Reply frame received for 5 I0520 00:59:33.332939 7 log.go:172] (0xc0051089a0) Data frame received for 3 I0520 00:59:33.332974 7 log.go:172] (0xc0015b4000) (3) Data frame handling I0520 00:59:33.332993 7 log.go:172] (0xc0015b4000) (3) Data frame sent I0520 00:59:33.335551 7 log.go:172] (0xc0051089a0) Data frame received for 3 I0520 00:59:33.335586 7 log.go:172] (0xc0015b4000) (3) Data frame handling I0520 00:59:33.335811 7 log.go:172] (0xc0051089a0) Data frame received for 5 I0520 00:59:33.335911 7 log.go:172] (0xc0021a4000) (5) Data frame handling I0520 00:59:33.337923 7 log.go:172] (0xc0051089a0) Data frame received for 1 I0520 00:59:33.337955 7 log.go:172] (0xc000dc2500) (1) 
Data frame handling I0520 00:59:33.337973 7 log.go:172] (0xc000dc2500) (1) Data frame sent I0520 00:59:33.337990 7 log.go:172] (0xc0051089a0) (0xc000dc2500) Stream removed, broadcasting: 1 I0520 00:59:33.338008 7 log.go:172] (0xc0051089a0) Go away received I0520 00:59:33.338239 7 log.go:172] (0xc0051089a0) (0xc000dc2500) Stream removed, broadcasting: 1 I0520 00:59:33.338269 7 log.go:172] (0xc0051089a0) (0xc0015b4000) Stream removed, broadcasting: 3 I0520 00:59:33.338302 7 log.go:172] (0xc0051089a0) (0xc0021a4000) Stream removed, broadcasting: 5 May 20 00:59:33.338: INFO: Deleting pod dns-4536... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:59:33.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4536" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":242,"skipped":4081,"failed":0} ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:59:33.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-8777/configmap-test-fe898d10-f52e-47e4-867e-3991490726d6 STEP: Creating a pod to test consume configMaps May 20 00:59:33.654: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-daead472-9a42-415e-8a95-8712510ecbee" in namespace "configmap-8777" to be "Succeeded or Failed" May 20 00:59:33.886: INFO: Pod "pod-configmaps-daead472-9a42-415e-8a95-8712510ecbee": Phase="Pending", Reason="", readiness=false. Elapsed: 231.932424ms May 20 00:59:35.892: INFO: Pod "pod-configmaps-daead472-9a42-415e-8a95-8712510ecbee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237054338s May 20 00:59:37.896: INFO: Pod "pod-configmaps-daead472-9a42-415e-8a95-8712510ecbee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.241102324s STEP: Saw pod success May 20 00:59:37.896: INFO: Pod "pod-configmaps-daead472-9a42-415e-8a95-8712510ecbee" satisfied condition "Succeeded or Failed" May 20 00:59:37.898: INFO: Trying to get logs from node latest-worker pod pod-configmaps-daead472-9a42-415e-8a95-8712510ecbee container env-test: STEP: delete the pod May 20 00:59:37.971: INFO: Waiting for pod pod-configmaps-daead472-9a42-415e-8a95-8712510ecbee to disappear May 20 00:59:37.983: INFO: Pod pod-configmaps-daead472-9a42-415e-8a95-8712510ecbee no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:59:37.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8777" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":243,"skipped":4081,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:59:37.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 20 00:59:38.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5208' May 20 00:59:41.052: INFO: stderr: "" May 20 00:59:41.052: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 20 00:59:46.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5208 -o json' May 20 00:59:46.198: INFO: 
stderr: "" May 20 00:59:46.199: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-20T00:59:41Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-20T00:59:41Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.243\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-20T00:59:44Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5208\",\n \"resourceVersion\": 
\"6100735\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5208/pods/e2e-test-httpd-pod\",\n \"uid\": \"c96fdf0f-ca5b-48b6-905c-124b6227bad9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-kgwms\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-kgwms\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-kgwms\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-20T00:59:41Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-20T00:59:44Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-20T00:59:44Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-20T00:59:41Z\",\n \"status\": \"True\",\n \"type\": 
\"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://f3842732d4ef2e97246640f5a8648957ec69b9b634e2cfd0a95de338517ca7da\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-20T00:59:43Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.243\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.243\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-20T00:59:41Z\"\n }\n}\n" STEP: replace the image in the pod May 20 00:59:46.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5208' May 20 00:59:46.518: INFO: stderr: "" May 20 00:59:46.518: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 20 00:59:46.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5208' May 20 00:59:54.876: INFO: stderr: "" May 20 00:59:54.876: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:59:54.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5208" for this suite. 
• [SLOW TEST:16.905 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":244,"skipped":4144,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:59:54.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 20 00:59:59.031: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 00:59:59.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2438" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":245,"skipped":4164,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 00:59:59.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 20 01:00:03.198: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:00:03.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8802" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":246,"skipped":4208,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:00:03.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 01:00:03.755: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 01:00:05.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725533203, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533203, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533203, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533203, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 01:00:07.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533203, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533203, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533203, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533203, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 01:00:10.800: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:00:10.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9115" for this suite. STEP: Destroying namespace "webhook-9115-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.787 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":247,"skipped":4211,"failed":0} SSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:00:11.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service 
account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-31100028-78fe-42d6-ac0f-94f89f25a015 STEP: Creating secret with name secret-projected-all-test-volume-6daf2c8f-d280-4e5e-9427-aa556ee9c446 STEP: Creating a pod to test Check all projections for projected volume plugin May 20 01:00:11.172: INFO: Waiting up to 5m0s for pod "projected-volume-33ecc890-a318-46e9-81d9-ff612d2feed5" in namespace "projected-7469" to be "Succeeded or Failed" May 20 01:00:11.176: INFO: Pod "projected-volume-33ecc890-a318-46e9-81d9-ff612d2feed5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036581ms May 20 01:00:13.180: INFO: Pod "projected-volume-33ecc890-a318-46e9-81d9-ff612d2feed5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008463755s May 20 01:00:15.185: INFO: Pod "projected-volume-33ecc890-a318-46e9-81d9-ff612d2feed5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01289713s STEP: Saw pod success May 20 01:00:15.185: INFO: Pod "projected-volume-33ecc890-a318-46e9-81d9-ff612d2feed5" satisfied condition "Succeeded or Failed" May 20 01:00:15.188: INFO: Trying to get logs from node latest-worker pod projected-volume-33ecc890-a318-46e9-81d9-ff612d2feed5 container projected-all-volume-test: STEP: delete the pod May 20 01:00:15.235: INFO: Waiting for pod projected-volume-33ecc890-a318-46e9-81d9-ff612d2feed5 to disappear May 20 01:00:15.245: INFO: Pod projected-volume-33ecc890-a318-46e9-81d9-ff612d2feed5 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:00:15.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7469" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":248,"skipped":4216,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:00:15.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 20 01:00:15.337: INFO: Waiting up to 5m0s for pod 
"pod-67cf71f7-9fa1-46ba-a604-4741c5e50ef1" in namespace "emptydir-1707" to be "Succeeded or Failed" May 20 01:00:15.347: INFO: Pod "pod-67cf71f7-9fa1-46ba-a604-4741c5e50ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.272258ms May 20 01:00:17.352: INFO: Pod "pod-67cf71f7-9fa1-46ba-a604-4741c5e50ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014546739s May 20 01:00:19.356: INFO: Pod "pod-67cf71f7-9fa1-46ba-a604-4741c5e50ef1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018626564s STEP: Saw pod success May 20 01:00:19.356: INFO: Pod "pod-67cf71f7-9fa1-46ba-a604-4741c5e50ef1" satisfied condition "Succeeded or Failed" May 20 01:00:19.364: INFO: Trying to get logs from node latest-worker pod pod-67cf71f7-9fa1-46ba-a604-4741c5e50ef1 container test-container: STEP: delete the pod May 20 01:00:19.426: INFO: Waiting for pod pod-67cf71f7-9fa1-46ba-a604-4741c5e50ef1 to disappear May 20 01:00:19.431: INFO: Pod pod-67cf71f7-9fa1-46ba-a604-4741c5e50ef1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:00:19.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1707" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":249,"skipped":4235,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:00:19.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:00:19.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3224" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":250,"skipped":4248,"failed":0} SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:00:19.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 20 01:00:19.613: INFO: PodSpec: initContainers in spec.initContainers May 20 01:01:10.488: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1748c105-7534-47c4-b36e-c2f0a8685a96", GenerateName:"", Namespace:"init-container-1356", SelfLink:"/api/v1/namespaces/init-container-1356/pods/pod-init-1748c105-7534-47c4-b36e-c2f0a8685a96", UID:"43bf48a9-5c90-43a3-8193-e1ec7495b229", ResourceVersion:"6101219", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725533219, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"613928764"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d9c060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d9c080)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d9c0a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d9c0c0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nbdsz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005f6a000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbdsz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbdsz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbdsz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005f96098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001f88000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005f96120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005f96140)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005f96148), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc005f9614c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533219, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533219, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533219, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725533219, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.1.248", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.248"}}, StartTime:(*v1.Time)(0xc004d9c0e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001f880e0)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001f88150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://c5718a0ab906cb9d5cb190497c88d348fcc9c8364d86cad857def2d818372035", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004d9c120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004d9c100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc005f961cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:01:10.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1356" for this suite. 
• [SLOW TEST:50.954 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":251,"skipped":4254,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:01:10.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 01:01:10.647: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-fa103f2a-d975-4213-a276-94c5e5cc1d57" in namespace "security-context-test-3330" to be "Succeeded or Failed" May 20 01:01:10.650: INFO: Pod "busybox-privileged-false-fa103f2a-d975-4213-a276-94c5e5cc1d57": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.453745ms May 20 01:01:12.655: INFO: Pod "busybox-privileged-false-fa103f2a-d975-4213-a276-94c5e5cc1d57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008190638s May 20 01:01:14.658: INFO: Pod "busybox-privileged-false-fa103f2a-d975-4213-a276-94c5e5cc1d57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011697232s May 20 01:01:14.658: INFO: Pod "busybox-privileged-false-fa103f2a-d975-4213-a276-94c5e5cc1d57" satisfied condition "Succeeded or Failed" May 20 01:01:14.675: INFO: Got logs for pod "busybox-privileged-false-fa103f2a-d975-4213-a276-94c5e5cc1d57": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:01:14.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3330" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":252,"skipped":4265,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:01:14.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-4936 STEP: creating replication controller nodeport-test in namespace services-4936 I0520 01:01:14.998247 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4936, replica count: 2 I0520 01:01:18.048645 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 01:01:21.048874 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 01:01:21.048: INFO: Creating new exec pod May 20 01:01:26.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4936 execpodb9j46 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 20 01:01:26.329: INFO: stderr: "I0520 01:01:26.209673 3165 log.go:172] (0xc000ab9550) (0xc0008565a0) Create stream\nI0520 01:01:26.209742 3165 log.go:172] (0xc000ab9550) (0xc0008565a0) Stream added, broadcasting: 1\nI0520 01:01:26.221821 3165 log.go:172] (0xc000ab9550) Reply frame received for 1\nI0520 01:01:26.221877 3165 log.go:172] (0xc000ab9550) (0xc00085ce60) Create stream\nI0520 01:01:26.221893 3165 log.go:172] (0xc000ab9550) (0xc00085ce60) Stream added, broadcasting: 3\nI0520 01:01:26.223337 3165 log.go:172] (0xc000ab9550) Reply frame received for 3\nI0520 01:01:26.223391 3165 log.go:172] (0xc000ab9550) (0xc000535720) Create stream\nI0520 01:01:26.223412 3165 log.go:172] (0xc000ab9550) (0xc000535720) Stream added, broadcasting: 5\nI0520 01:01:26.224733 3165 log.go:172] (0xc000ab9550) Reply frame received for 5\nI0520 01:01:26.302574 3165 log.go:172] (0xc000ab9550) Data frame received for 5\nI0520 01:01:26.302606 3165 log.go:172] (0xc000535720) (5) Data frame 
handling\nI0520 01:01:26.302630 3165 log.go:172] (0xc000535720) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0520 01:01:26.320763 3165 log.go:172] (0xc000ab9550) Data frame received for 5\nI0520 01:01:26.320791 3165 log.go:172] (0xc000535720) (5) Data frame handling\nI0520 01:01:26.320836 3165 log.go:172] (0xc000535720) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0520 01:01:26.321508 3165 log.go:172] (0xc000ab9550) Data frame received for 5\nI0520 01:01:26.321548 3165 log.go:172] (0xc000ab9550) Data frame received for 3\nI0520 01:01:26.321607 3165 log.go:172] (0xc00085ce60) (3) Data frame handling\nI0520 01:01:26.321648 3165 log.go:172] (0xc000535720) (5) Data frame handling\nI0520 01:01:26.323286 3165 log.go:172] (0xc000ab9550) Data frame received for 1\nI0520 01:01:26.323321 3165 log.go:172] (0xc0008565a0) (1) Data frame handling\nI0520 01:01:26.323352 3165 log.go:172] (0xc0008565a0) (1) Data frame sent\nI0520 01:01:26.323382 3165 log.go:172] (0xc000ab9550) (0xc0008565a0) Stream removed, broadcasting: 1\nI0520 01:01:26.323412 3165 log.go:172] (0xc000ab9550) Go away received\nI0520 01:01:26.323855 3165 log.go:172] (0xc000ab9550) (0xc0008565a0) Stream removed, broadcasting: 1\nI0520 01:01:26.323880 3165 log.go:172] (0xc000ab9550) (0xc00085ce60) Stream removed, broadcasting: 3\nI0520 01:01:26.323894 3165 log.go:172] (0xc000ab9550) (0xc000535720) Stream removed, broadcasting: 5\n" May 20 01:01:26.329: INFO: stdout: "" May 20 01:01:26.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4936 execpodb9j46 -- /bin/sh -x -c nc -zv -t -w 2 10.108.124.155 80' May 20 01:01:26.528: INFO: stderr: "I0520 01:01:26.460644 3186 log.go:172] (0xc00098a6e0) (0xc000657c20) Create stream\nI0520 01:01:26.460698 3186 log.go:172] (0xc00098a6e0) (0xc000657c20) Stream added, broadcasting: 1\nI0520 01:01:26.463347 3186 log.go:172] (0xc00098a6e0) Reply frame 
received for 1\nI0520 01:01:26.463396 3186 log.go:172] (0xc00098a6e0) (0xc0005f25a0) Create stream\nI0520 01:01:26.463412 3186 log.go:172] (0xc00098a6e0) (0xc0005f25a0) Stream added, broadcasting: 3\nI0520 01:01:26.464182 3186 log.go:172] (0xc00098a6e0) Reply frame received for 3\nI0520 01:01:26.464212 3186 log.go:172] (0xc00098a6e0) (0xc00064ed20) Create stream\nI0520 01:01:26.464229 3186 log.go:172] (0xc00098a6e0) (0xc00064ed20) Stream added, broadcasting: 5\nI0520 01:01:26.465075 3186 log.go:172] (0xc00098a6e0) Reply frame received for 5\nI0520 01:01:26.520815 3186 log.go:172] (0xc00098a6e0) Data frame received for 5\nI0520 01:01:26.520854 3186 log.go:172] (0xc00064ed20) (5) Data frame handling\nI0520 01:01:26.520871 3186 log.go:172] (0xc00064ed20) (5) Data frame sent\nI0520 01:01:26.520880 3186 log.go:172] (0xc00098a6e0) Data frame received for 5\nI0520 01:01:26.520889 3186 log.go:172] (0xc00064ed20) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.124.155 80\nConnection to 10.108.124.155 80 port [tcp/http] succeeded!\nI0520 01:01:26.520911 3186 log.go:172] (0xc00098a6e0) Data frame received for 3\nI0520 01:01:26.520919 3186 log.go:172] (0xc0005f25a0) (3) Data frame handling\nI0520 01:01:26.522550 3186 log.go:172] (0xc00098a6e0) Data frame received for 1\nI0520 01:01:26.522575 3186 log.go:172] (0xc000657c20) (1) Data frame handling\nI0520 01:01:26.522587 3186 log.go:172] (0xc000657c20) (1) Data frame sent\nI0520 01:01:26.522615 3186 log.go:172] (0xc00098a6e0) (0xc000657c20) Stream removed, broadcasting: 1\nI0520 01:01:26.522650 3186 log.go:172] (0xc00098a6e0) Go away received\nI0520 01:01:26.523073 3186 log.go:172] (0xc00098a6e0) (0xc000657c20) Stream removed, broadcasting: 1\nI0520 01:01:26.523101 3186 log.go:172] (0xc00098a6e0) (0xc0005f25a0) Stream removed, broadcasting: 3\nI0520 01:01:26.523114 3186 log.go:172] (0xc00098a6e0) (0xc00064ed20) Stream removed, broadcasting: 5\n" May 20 01:01:26.528: INFO: stdout: "" May 20 01:01:26.528: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4936 execpodb9j46 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30661' May 20 01:01:26.725: INFO: stderr: "I0520 01:01:26.662561 3208 log.go:172] (0xc000b2c000) (0xc000528f00) Create stream\nI0520 01:01:26.662630 3208 log.go:172] (0xc000b2c000) (0xc000528f00) Stream added, broadcasting: 1\nI0520 01:01:26.665951 3208 log.go:172] (0xc000b2c000) Reply frame received for 1\nI0520 01:01:26.666013 3208 log.go:172] (0xc000b2c000) (0xc0004f2140) Create stream\nI0520 01:01:26.666033 3208 log.go:172] (0xc000b2c000) (0xc0004f2140) Stream added, broadcasting: 3\nI0520 01:01:26.667098 3208 log.go:172] (0xc000b2c000) Reply frame received for 3\nI0520 01:01:26.667134 3208 log.go:172] (0xc000b2c000) (0xc0004f30e0) Create stream\nI0520 01:01:26.667142 3208 log.go:172] (0xc000b2c000) (0xc0004f30e0) Stream added, broadcasting: 5\nI0520 01:01:26.668155 3208 log.go:172] (0xc000b2c000) Reply frame received for 5\nI0520 01:01:26.718285 3208 log.go:172] (0xc000b2c000) Data frame received for 3\nI0520 01:01:26.718323 3208 log.go:172] (0xc0004f2140) (3) Data frame handling\nI0520 01:01:26.718559 3208 log.go:172] (0xc000b2c000) Data frame received for 5\nI0520 01:01:26.718585 3208 log.go:172] (0xc0004f30e0) (5) Data frame handling\nI0520 01:01:26.718604 3208 log.go:172] (0xc0004f30e0) (5) Data frame sent\nI0520 01:01:26.718615 3208 log.go:172] (0xc000b2c000) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.13 30661\nConnection to 172.17.0.13 30661 port [tcp/30661] succeeded!\nI0520 01:01:26.718624 3208 log.go:172] (0xc0004f30e0) (5) Data frame handling\nI0520 01:01:26.720211 3208 log.go:172] (0xc000b2c000) Data frame received for 1\nI0520 01:01:26.720250 3208 log.go:172] (0xc000528f00) (1) Data frame handling\nI0520 01:01:26.720300 3208 log.go:172] (0xc000528f00) (1) Data frame sent\nI0520 01:01:26.720320 3208 log.go:172] (0xc000b2c000) (0xc000528f00) Stream removed, 
broadcasting: 1\nI0520 01:01:26.720360 3208 log.go:172] (0xc000b2c000) Go away received\nI0520 01:01:26.720733 3208 log.go:172] (0xc000b2c000) (0xc000528f00) Stream removed, broadcasting: 1\nI0520 01:01:26.720754 3208 log.go:172] (0xc000b2c000) (0xc0004f2140) Stream removed, broadcasting: 3\nI0520 01:01:26.720764 3208 log.go:172] (0xc000b2c000) (0xc0004f30e0) Stream removed, broadcasting: 5\n" May 20 01:01:26.725: INFO: stdout: "" May 20 01:01:26.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4936 execpodb9j46 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30661' May 20 01:01:26.922: INFO: stderr: "I0520 01:01:26.855348 3228 log.go:172] (0xc000a4f290) (0xc000b4e500) Create stream\nI0520 01:01:26.855435 3228 log.go:172] (0xc000a4f290) (0xc000b4e500) Stream added, broadcasting: 1\nI0520 01:01:26.858657 3228 log.go:172] (0xc000a4f290) Reply frame received for 1\nI0520 01:01:26.858717 3228 log.go:172] (0xc000a4f290) (0xc000aea140) Create stream\nI0520 01:01:26.858732 3228 log.go:172] (0xc000a4f290) (0xc000aea140) Stream added, broadcasting: 3\nI0520 01:01:26.859681 3228 log.go:172] (0xc000a4f290) Reply frame received for 3\nI0520 01:01:26.859733 3228 log.go:172] (0xc000a4f290) (0xc000ac01e0) Create stream\nI0520 01:01:26.859749 3228 log.go:172] (0xc000a4f290) (0xc000ac01e0) Stream added, broadcasting: 5\nI0520 01:01:26.860464 3228 log.go:172] (0xc000a4f290) Reply frame received for 5\nI0520 01:01:26.915391 3228 log.go:172] (0xc000a4f290) Data frame received for 3\nI0520 01:01:26.915436 3228 log.go:172] (0xc000aea140) (3) Data frame handling\nI0520 01:01:26.915466 3228 log.go:172] (0xc000a4f290) Data frame received for 5\nI0520 01:01:26.915489 3228 log.go:172] (0xc000ac01e0) (5) Data frame handling\nI0520 01:01:26.915510 3228 log.go:172] (0xc000ac01e0) (5) Data frame sent\nI0520 01:01:26.915529 3228 log.go:172] (0xc000a4f290) Data frame received for 5\nI0520 01:01:26.915544 3228 
log.go:172] (0xc000ac01e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30661\nConnection to 172.17.0.12 30661 port [tcp/30661] succeeded!\nI0520 01:01:26.917272 3228 log.go:172] (0xc000a4f290) Data frame received for 1\nI0520 01:01:26.917355 3228 log.go:172] (0xc000b4e500) (1) Data frame handling\nI0520 01:01:26.917370 3228 log.go:172] (0xc000b4e500) (1) Data frame sent\nI0520 01:01:26.917379 3228 log.go:172] (0xc000a4f290) (0xc000b4e500) Stream removed, broadcasting: 1\nI0520 01:01:26.917558 3228 log.go:172] (0xc000a4f290) Go away received\nI0520 01:01:26.917744 3228 log.go:172] (0xc000a4f290) (0xc000b4e500) Stream removed, broadcasting: 1\nI0520 01:01:26.917756 3228 log.go:172] (0xc000a4f290) (0xc000aea140) Stream removed, broadcasting: 3\nI0520 01:01:26.917766 3228 log.go:172] (0xc000a4f290) (0xc000ac01e0) Stream removed, broadcasting: 5\n" May 20 01:01:26.922: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:01:26.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4936" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.248 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":253,"skipped":4285,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:01:26.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 20 01:01:27.079: INFO: Waiting up to 5m0s for pod "pod-e6309ecf-c8e1-4092-87a7-200b426a72de" in namespace "emptydir-2804" to be "Succeeded or Failed" May 20 01:01:27.082: INFO: Pod "pod-e6309ecf-c8e1-4092-87a7-200b426a72de": Phase="Pending", Reason="", readiness=false. Elapsed: 3.541637ms May 20 01:01:29.090: INFO: Pod "pod-e6309ecf-c8e1-4092-87a7-200b426a72de": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011512043s May 20 01:01:31.093: INFO: Pod "pod-e6309ecf-c8e1-4092-87a7-200b426a72de": Phase="Running", Reason="", readiness=true. Elapsed: 4.014670888s May 20 01:01:33.097: INFO: Pod "pod-e6309ecf-c8e1-4092-87a7-200b426a72de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018488447s STEP: Saw pod success May 20 01:01:33.097: INFO: Pod "pod-e6309ecf-c8e1-4092-87a7-200b426a72de" satisfied condition "Succeeded or Failed" May 20 01:01:33.100: INFO: Trying to get logs from node latest-worker2 pod pod-e6309ecf-c8e1-4092-87a7-200b426a72de container test-container: STEP: delete the pod May 20 01:01:33.170: INFO: Waiting for pod pod-e6309ecf-c8e1-4092-87a7-200b426a72de to disappear May 20 01:01:33.182: INFO: Pod pod-e6309ecf-c8e1-4092-87a7-200b426a72de no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:01:33.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2804" for this suite. 
• [SLOW TEST:6.258 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":254,"skipped":4289,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:01:33.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 01:01:33.257: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:01:39.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3889" for this suite. 
• [SLOW TEST:6.364 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":255,"skipped":4306,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:01:39.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-415, will wait for the garbage collector to delete the pods May 20 01:01:45.673: INFO: Deleting Job.batch foo took: 6.180062ms May 20 01:02:09.774: INFO: Terminating Job.batch foo pods took: 24.100320741s STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:02:45.402: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-415" for this suite. • [SLOW TEST:65.858 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":256,"skipped":4320,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:02:45.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 20 01:02:45.475: INFO: Waiting up to 5m0s for pod "pod-e8df15e3-dc5c-43f6-8421-7e0b1b5d4e71" in namespace "emptydir-4028" to be "Succeeded or Failed" May 20 01:02:45.495: INFO: Pod "pod-e8df15e3-dc5c-43f6-8421-7e0b1b5d4e71": Phase="Pending", Reason="", readiness=false. Elapsed: 19.296006ms May 20 01:02:47.498: INFO: Pod "pod-e8df15e3-dc5c-43f6-8421-7e0b1b5d4e71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02304419s May 20 01:02:49.503: INFO: Pod "pod-e8df15e3-dc5c-43f6-8421-7e0b1b5d4e71": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027463366s STEP: Saw pod success May 20 01:02:49.503: INFO: Pod "pod-e8df15e3-dc5c-43f6-8421-7e0b1b5d4e71" satisfied condition "Succeeded or Failed" May 20 01:02:49.506: INFO: Trying to get logs from node latest-worker pod pod-e8df15e3-dc5c-43f6-8421-7e0b1b5d4e71 container test-container: STEP: delete the pod May 20 01:02:49.577: INFO: Waiting for pod pod-e8df15e3-dc5c-43f6-8421-7e0b1b5d4e71 to disappear May 20 01:02:49.586: INFO: Pod pod-e8df15e3-dc5c-43f6-8421-7e0b1b5d4e71 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:02:49.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4028" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":257,"skipped":4324,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:02:49.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-5c1ddd79-adba-4f12-beb8-477ac7802821 STEP: Creating a pod to test consume configMaps May 20 01:02:49.706: INFO: Waiting up to 5m0s 
for pod "pod-projected-configmaps-b92ab5fc-43f8-4a04-bc86-df74dcd78b9e" in namespace "projected-6907" to be "Succeeded or Failed" May 20 01:02:49.726: INFO: Pod "pod-projected-configmaps-b92ab5fc-43f8-4a04-bc86-df74dcd78b9e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.012222ms May 20 01:02:51.795: INFO: Pod "pod-projected-configmaps-b92ab5fc-43f8-4a04-bc86-df74dcd78b9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088156435s May 20 01:02:53.799: INFO: Pod "pod-projected-configmaps-b92ab5fc-43f8-4a04-bc86-df74dcd78b9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092164796s STEP: Saw pod success May 20 01:02:53.799: INFO: Pod "pod-projected-configmaps-b92ab5fc-43f8-4a04-bc86-df74dcd78b9e" satisfied condition "Succeeded or Failed" May 20 01:02:53.801: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-b92ab5fc-43f8-4a04-bc86-df74dcd78b9e container projected-configmap-volume-test: STEP: delete the pod May 20 01:02:54.052: INFO: Waiting for pod pod-projected-configmaps-b92ab5fc-43f8-4a04-bc86-df74dcd78b9e to disappear May 20 01:02:54.091: INFO: Pod pod-projected-configmaps-b92ab5fc-43f8-4a04-bc86-df74dcd78b9e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:02:54.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6907" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":258,"skipped":4332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:02:54.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6014 STEP: creating service affinity-nodeport in namespace services-6014 STEP: creating replication controller affinity-nodeport in namespace services-6014 I0520 01:02:54.246025 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-6014, replica count: 3 I0520 01:02:57.296477 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 01:03:00.296749 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 01:03:00.308: INFO: Creating new exec pod May 20 01:03:05.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config 
exec --namespace=services-6014 execpod-affinitytcz4w -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 20 01:03:05.691: INFO: stderr: "I0520 01:03:05.627255 3248 log.go:172] (0xc0006f16b0) (0xc000ace1e0) Create stream\nI0520 01:03:05.627349 3248 log.go:172] (0xc0006f16b0) (0xc000ace1e0) Stream added, broadcasting: 1\nI0520 01:03:05.631782 3248 log.go:172] (0xc0006f16b0) Reply frame received for 1\nI0520 01:03:05.631826 3248 log.go:172] (0xc0006f16b0) (0xc000734f00) Create stream\nI0520 01:03:05.631838 3248 log.go:172] (0xc0006f16b0) (0xc000734f00) Stream added, broadcasting: 3\nI0520 01:03:05.632893 3248 log.go:172] (0xc0006f16b0) Reply frame received for 3\nI0520 01:03:05.632939 3248 log.go:172] (0xc0006f16b0) (0xc00072a640) Create stream\nI0520 01:03:05.632952 3248 log.go:172] (0xc0006f16b0) (0xc00072a640) Stream added, broadcasting: 5\nI0520 01:03:05.634048 3248 log.go:172] (0xc0006f16b0) Reply frame received for 5\nI0520 01:03:05.682350 3248 log.go:172] (0xc0006f16b0) Data frame received for 5\nI0520 01:03:05.682376 3248 log.go:172] (0xc00072a640) (5) Data frame handling\nI0520 01:03:05.682393 3248 log.go:172] (0xc00072a640) (5) Data frame sent\nI0520 01:03:05.682401 3248 log.go:172] (0xc0006f16b0) Data frame received for 5\nI0520 01:03:05.682407 3248 log.go:172] (0xc00072a640) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0520 01:03:05.682423 3248 log.go:172] (0xc00072a640) (5) Data frame sent\nI0520 01:03:05.682670 3248 log.go:172] (0xc0006f16b0) Data frame received for 5\nI0520 01:03:05.682698 3248 log.go:172] (0xc00072a640) (5) Data frame handling\nI0520 01:03:05.682869 3248 log.go:172] (0xc0006f16b0) Data frame received for 3\nI0520 01:03:05.682890 3248 log.go:172] (0xc000734f00) (3) Data frame handling\nI0520 01:03:05.684484 3248 log.go:172] (0xc0006f16b0) Data frame received for 1\nI0520 01:03:05.684501 3248 log.go:172] (0xc000ace1e0) (1) Data frame handling\nI0520 
01:03:05.684517 3248 log.go:172] (0xc000ace1e0) (1) Data frame sent\nI0520 01:03:05.684580 3248 log.go:172] (0xc0006f16b0) (0xc000ace1e0) Stream removed, broadcasting: 1\nI0520 01:03:05.684647 3248 log.go:172] (0xc0006f16b0) Go away received\nI0520 01:03:05.684917 3248 log.go:172] (0xc0006f16b0) (0xc000ace1e0) Stream removed, broadcasting: 1\nI0520 01:03:05.684933 3248 log.go:172] (0xc0006f16b0) (0xc000734f00) Stream removed, broadcasting: 3\nI0520 01:03:05.684942 3248 log.go:172] (0xc0006f16b0) (0xc00072a640) Stream removed, broadcasting: 5\n" May 20 01:03:05.691: INFO: stdout: "" May 20 01:03:05.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6014 execpod-affinitytcz4w -- /bin/sh -x -c nc -zv -t -w 2 10.110.72.236 80' May 20 01:03:05.898: INFO: stderr: "I0520 01:03:05.816488 3271 log.go:172] (0xc000a711e0) (0xc00070c6e0) Create stream\nI0520 01:03:05.816537 3271 log.go:172] (0xc000a711e0) (0xc00070c6e0) Stream added, broadcasting: 1\nI0520 01:03:05.818781 3271 log.go:172] (0xc000a711e0) Reply frame received for 1\nI0520 01:03:05.818826 3271 log.go:172] (0xc000a711e0) (0xc000714000) Create stream\nI0520 01:03:05.818839 3271 log.go:172] (0xc000a711e0) (0xc000714000) Stream added, broadcasting: 3\nI0520 01:03:05.819550 3271 log.go:172] (0xc000a711e0) Reply frame received for 3\nI0520 01:03:05.819578 3271 log.go:172] (0xc000a711e0) (0xc00070d040) Create stream\nI0520 01:03:05.819591 3271 log.go:172] (0xc000a711e0) (0xc00070d040) Stream added, broadcasting: 5\nI0520 01:03:05.820468 3271 log.go:172] (0xc000a711e0) Reply frame received for 5\nI0520 01:03:05.890226 3271 log.go:172] (0xc000a711e0) Data frame received for 5\nI0520 01:03:05.890259 3271 log.go:172] (0xc00070d040) (5) Data frame handling\nI0520 01:03:05.890269 3271 log.go:172] (0xc00070d040) (5) Data frame sent\nI0520 01:03:05.890275 3271 log.go:172] (0xc000a711e0) Data frame received for 5\nI0520 01:03:05.890281 3271 
log.go:172] (0xc00070d040) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.72.236 80\nConnection to 10.110.72.236 80 port [tcp/http] succeeded!\nI0520 01:03:05.890299 3271 log.go:172] (0xc000a711e0) Data frame received for 3\nI0520 01:03:05.890307 3271 log.go:172] (0xc000714000) (3) Data frame handling\nI0520 01:03:05.891852 3271 log.go:172] (0xc000a711e0) Data frame received for 1\nI0520 01:03:05.891928 3271 log.go:172] (0xc00070c6e0) (1) Data frame handling\nI0520 01:03:05.891953 3271 log.go:172] (0xc00070c6e0) (1) Data frame sent\nI0520 01:03:05.891968 3271 log.go:172] (0xc000a711e0) (0xc00070c6e0) Stream removed, broadcasting: 1\nI0520 01:03:05.891991 3271 log.go:172] (0xc000a711e0) Go away received\nI0520 01:03:05.892349 3271 log.go:172] (0xc000a711e0) (0xc00070c6e0) Stream removed, broadcasting: 1\nI0520 01:03:05.892370 3271 log.go:172] (0xc000a711e0) (0xc000714000) Stream removed, broadcasting: 3\nI0520 01:03:05.892378 3271 log.go:172] (0xc000a711e0) (0xc00070d040) Stream removed, broadcasting: 5\n" May 20 01:03:05.898: INFO: stdout: "" May 20 01:03:05.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6014 execpod-affinitytcz4w -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31378' May 20 01:03:06.096: INFO: stderr: "I0520 01:03:06.018384 3290 log.go:172] (0xc000a078c0) (0xc000c16460) Create stream\nI0520 01:03:06.018434 3290 log.go:172] (0xc000a078c0) (0xc000c16460) Stream added, broadcasting: 1\nI0520 01:03:06.022757 3290 log.go:172] (0xc000a078c0) Reply frame received for 1\nI0520 01:03:06.022802 3290 log.go:172] (0xc000a078c0) (0xc0006a8e60) Create stream\nI0520 01:03:06.022817 3290 log.go:172] (0xc000a078c0) (0xc0006a8e60) Stream added, broadcasting: 3\nI0520 01:03:06.023701 3290 log.go:172] (0xc000a078c0) Reply frame received for 3\nI0520 01:03:06.023738 3290 log.go:172] (0xc000a078c0) (0xc0006945a0) Create stream\nI0520 01:03:06.023752 3290 log.go:172] (0xc000a078c0) 
(0xc0006945a0) Stream added, broadcasting: 5\nI0520 01:03:06.024735 3290 log.go:172] (0xc000a078c0) Reply frame received for 5\nI0520 01:03:06.090068 3290 log.go:172] (0xc000a078c0) Data frame received for 5\nI0520 01:03:06.090103 3290 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0520 01:03:06.090139 3290 log.go:172] (0xc0006945a0) (5) Data frame sent\nI0520 01:03:06.090162 3290 log.go:172] (0xc000a078c0) Data frame received for 5\nI0520 01:03:06.090177 3290 log.go:172] (0xc0006945a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31378\nConnection to 172.17.0.13 31378 port [tcp/31378] succeeded!\nI0520 01:03:06.090598 3290 log.go:172] (0xc000a078c0) Data frame received for 3\nI0520 01:03:06.090620 3290 log.go:172] (0xc0006a8e60) (3) Data frame handling\nI0520 01:03:06.091981 3290 log.go:172] (0xc000a078c0) Data frame received for 1\nI0520 01:03:06.092060 3290 log.go:172] (0xc000c16460) (1) Data frame handling\nI0520 01:03:06.092090 3290 log.go:172] (0xc000c16460) (1) Data frame sent\nI0520 01:03:06.092109 3290 log.go:172] (0xc000a078c0) (0xc000c16460) Stream removed, broadcasting: 1\nI0520 01:03:06.092127 3290 log.go:172] (0xc000a078c0) Go away received\nI0520 01:03:06.092502 3290 log.go:172] (0xc000a078c0) (0xc000c16460) Stream removed, broadcasting: 1\nI0520 01:03:06.092524 3290 log.go:172] (0xc000a078c0) (0xc0006a8e60) Stream removed, broadcasting: 3\nI0520 01:03:06.092536 3290 log.go:172] (0xc000a078c0) (0xc0006945a0) Stream removed, broadcasting: 5\n" May 20 01:03:06.096: INFO: stdout: "" May 20 01:03:06.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6014 execpod-affinitytcz4w -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31378' May 20 01:03:06.322: INFO: stderr: "I0520 01:03:06.243738 3311 log.go:172] (0xc0007d68f0) (0xc0004f6140) Create stream\nI0520 01:03:06.243801 3311 log.go:172] (0xc0007d68f0) (0xc0004f6140) Stream added, broadcasting: 1\nI0520 
01:03:06.246545 3311 log.go:172] (0xc0007d68f0) Reply frame received for 1\nI0520 01:03:06.246584 3311 log.go:172] (0xc0007d68f0) (0xc00043cc80) Create stream\nI0520 01:03:06.246601 3311 log.go:172] (0xc0007d68f0) (0xc00043cc80) Stream added, broadcasting: 3\nI0520 01:03:06.247596 3311 log.go:172] (0xc0007d68f0) Reply frame received for 3\nI0520 01:03:06.247622 3311 log.go:172] (0xc0007d68f0) (0xc000139540) Create stream\nI0520 01:03:06.247631 3311 log.go:172] (0xc0007d68f0) (0xc000139540) Stream added, broadcasting: 5\nI0520 01:03:06.248549 3311 log.go:172] (0xc0007d68f0) Reply frame received for 5\nI0520 01:03:06.315298 3311 log.go:172] (0xc0007d68f0) Data frame received for 3\nI0520 01:03:06.315336 3311 log.go:172] (0xc00043cc80) (3) Data frame handling\nI0520 01:03:06.315355 3311 log.go:172] (0xc0007d68f0) Data frame received for 5\nI0520 01:03:06.315365 3311 log.go:172] (0xc000139540) (5) Data frame handling\nI0520 01:03:06.315373 3311 log.go:172] (0xc000139540) (5) Data frame sent\nI0520 01:03:06.315379 3311 log.go:172] (0xc0007d68f0) Data frame received for 5\nI0520 01:03:06.315384 3311 log.go:172] (0xc000139540) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31378\nConnection to 172.17.0.12 31378 port [tcp/31378] succeeded!\nI0520 01:03:06.316794 3311 log.go:172] (0xc0007d68f0) Data frame received for 1\nI0520 01:03:06.316815 3311 log.go:172] (0xc0004f6140) (1) Data frame handling\nI0520 01:03:06.316853 3311 log.go:172] (0xc0004f6140) (1) Data frame sent\nI0520 01:03:06.316874 3311 log.go:172] (0xc0007d68f0) (0xc0004f6140) Stream removed, broadcasting: 1\nI0520 01:03:06.317339 3311 log.go:172] (0xc0007d68f0) (0xc0004f6140) Stream removed, broadcasting: 1\nI0520 01:03:06.317357 3311 log.go:172] (0xc0007d68f0) (0xc00043cc80) Stream removed, broadcasting: 3\nI0520 01:03:06.317515 3311 log.go:172] (0xc0007d68f0) (0xc000139540) Stream removed, broadcasting: 5\n" May 20 01:03:06.322: INFO: stdout: "" May 20 01:03:06.322: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6014 execpod-affinitytcz4w -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31378/ ; done' May 20 01:03:06.610: INFO: stderr: "I0520 01:03:06.456577 3331 log.go:172] (0xc000add1e0) (0xc0009ce5a0) Create stream\nI0520 01:03:06.456632 3331 log.go:172] (0xc000add1e0) (0xc0009ce5a0) Stream added, broadcasting: 1\nI0520 01:03:06.462182 3331 log.go:172] (0xc000add1e0) Reply frame received for 1\nI0520 01:03:06.462223 3331 log.go:172] (0xc000add1e0) (0xc000526e60) Create stream\nI0520 01:03:06.462234 3331 log.go:172] (0xc000add1e0) (0xc000526e60) Stream added, broadcasting: 3\nI0520 01:03:06.463290 3331 log.go:172] (0xc000add1e0) Reply frame received for 3\nI0520 01:03:06.463345 3331 log.go:172] (0xc000add1e0) (0xc00034a140) Create stream\nI0520 01:03:06.463370 3331 log.go:172] (0xc000add1e0) (0xc00034a140) Stream added, broadcasting: 5\nI0520 01:03:06.464258 3331 log.go:172] (0xc000add1e0) Reply frame received for 5\nI0520 01:03:06.521015 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.521052 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.521062 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 01:03:06.521086 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.521321 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.521360 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.523972 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.523999 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.524025 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.524372 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.524397 3331 log.go:172] 
(0xc00034a140) (5) Data frame handling\nI0520 01:03:06.524404 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 01:03:06.524412 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.524417 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.524421 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.531322 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.531357 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.531397 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.531896 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.531940 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.531966 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 01:03:06.531991 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.532009 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.532026 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.536166 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.536203 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.536225 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.536664 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.536689 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.536705 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ echo\nI0520 01:03:06.536773 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.536790 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.536806 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 01:03:06.536822 3331 
log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.536831 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.536843 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.541676 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.541704 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.541725 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.542061 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.542097 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.542118 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.542145 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.542157 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.542177 3331 log.go:172] (0xc00034a140) (5) Data frame sent\nI0520 01:03:06.542189 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.542204 3331 log.go:172] (0xc00034a140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 01:03:06.542235 3331 log.go:172] (0xc00034a140) (5) Data frame sent\nI0520 01:03:06.546779 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.546797 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.546811 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.547260 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.547279 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.547305 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.547316 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.547333 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.547342 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 
01:03:06.551600 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.551623 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.551639 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.552001 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.552028 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.552043 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.552060 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.552073 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.552081 3331 log.go:172] (0xc00034a140) (5) Data frame sent\nI0520 01:03:06.552088 3331 log.go:172] (0xc000add1e0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0520 01:03:06.552094 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.552118 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n http://172.17.0.13:31378/\nI0520 01:03:06.556446 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.556466 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.556484 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.556899 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.556921 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.556932 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.556948 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.556956 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.556966 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 01:03:06.560801 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.560821 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.560836 3331 log.go:172] (0xc000526e60) (3) Data 
frame sent\nI0520 01:03:06.561386 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.561419 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.561435 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.561457 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.561470 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.561487 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 01:03:06.565654 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.565672 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.565680 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.565997 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.566010 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.566018 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 01:03:06.566029 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.566054 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.566088 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.570999 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.571013 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.571020 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.571556 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.571584 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.571620 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.571640 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 01:03:06.571658 3331 log.go:172] (0xc000526e60) (3) 
Data frame handling\nI0520 01:03:06.571679 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.575434 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.575462 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.575490 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.575954 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.575983 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.576018 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.576040 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 01:03:06.576073 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.576115 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.579705 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.579731 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.579759 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.580200 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.580223 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.580237 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 01:03:06.580254 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.580264 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.580274 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.584664 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.584679 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.584686 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.585656 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.585687 3331 log.go:172] 
(0xc000526e60) (3) Data frame handling\nI0520 01:03:06.585701 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.585722 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.585735 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.585746 3331 log.go:172] (0xc00034a140) (5) Data frame sent\nI0520 01:03:06.585766 3331 log.go:172] (0xc000add1e0) Data frame received for 5\n+ echo\nI0520 01:03:06.585778 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.585817 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 01:03:06.591854 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.591902 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.591926 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.593586 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.593623 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.593647 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.593679 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.593698 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.593729 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ I0520 01:03:06.593747 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.593775 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.593795 3331 log.go:172] (0xc00034a140) (5) Data frame sent\necho\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 01:03:06.598769 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.598788 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.598801 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.599085 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.599100 
3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.599109 3331 log.go:172] (0xc00034a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31378/\nI0520 01:03:06.599262 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.599271 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.599283 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.603163 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.603183 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.603198 3331 log.go:172] (0xc000526e60) (3) Data frame sent\nI0520 01:03:06.604165 3331 log.go:172] (0xc000add1e0) Data frame received for 5\nI0520 01:03:06.604197 3331 log.go:172] (0xc00034a140) (5) Data frame handling\nI0520 01:03:06.604235 3331 log.go:172] (0xc000add1e0) Data frame received for 3\nI0520 01:03:06.604250 3331 log.go:172] (0xc000526e60) (3) Data frame handling\nI0520 01:03:06.606179 3331 log.go:172] (0xc000add1e0) Data frame received for 1\nI0520 01:03:06.606200 3331 log.go:172] (0xc0009ce5a0) (1) Data frame handling\nI0520 01:03:06.606210 3331 log.go:172] (0xc0009ce5a0) (1) Data frame sent\nI0520 01:03:06.606218 3331 log.go:172] (0xc000add1e0) (0xc0009ce5a0) Stream removed, broadcasting: 1\nI0520 01:03:06.606226 3331 log.go:172] (0xc000add1e0) Go away received\nI0520 01:03:06.606487 3331 log.go:172] (0xc000add1e0) (0xc0009ce5a0) Stream removed, broadcasting: 1\nI0520 01:03:06.606504 3331 log.go:172] (0xc000add1e0) (0xc000526e60) Stream removed, broadcasting: 3\nI0520 01:03:06.606516 3331 log.go:172] (0xc000add1e0) (0xc00034a140) Stream removed, broadcasting: 5\n" May 20 01:03:06.611: INFO: stdout: 
"\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7\naffinity-nodeport-rfpd7" May 20 01:03:06.611: INFO: Received response from host: May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Received response from host: affinity-nodeport-rfpd7 May 20 01:03:06.611: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-6014, will wait for the garbage collector to delete the pods May 20 01:03:06.722: INFO: Deleting ReplicationController 
affinity-nodeport took: 7.005212ms May 20 01:03:07.122: INFO: Terminating ReplicationController affinity-nodeport pods took: 400.312604ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:03:15.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6014" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:21.325 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":259,"skipped":4355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:03:15.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of 
the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:03:20.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6473" for this suite. • [SLOW TEST:5.056 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":260,"skipped":4385,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:03:20.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 20 01:03:20.575: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check
the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:03:36.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7278" for this suite. • [SLOW TEST:15.641 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":261,"skipped":4390,"failed":0} SSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:03:36.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace 
services-8993 STEP: creating service affinity-clusterip in namespace services-8993 STEP: creating replication controller affinity-clusterip in namespace services-8993 I0520 01:03:36.313907 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-8993, replica count: 3 I0520 01:03:39.364299 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 01:03:42.364483 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 01:03:42.371: INFO: Creating new exec pod May 20 01:03:47.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8993 execpod-affinity2mnqc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 20 01:03:47.584: INFO: stderr: "I0520 01:03:47.516695 3358 log.go:172] (0xc000944bb0) (0xc000698a00) Create stream\nI0520 01:03:47.516749 3358 log.go:172] (0xc000944bb0) (0xc000698a00) Stream added, broadcasting: 1\nI0520 01:03:47.523733 3358 log.go:172] (0xc000944bb0) Reply frame received for 1\nI0520 01:03:47.523822 3358 log.go:172] (0xc000944bb0) (0xc000b0a820) Create stream\nI0520 01:03:47.523860 3358 log.go:172] (0xc000944bb0) (0xc000b0a820) Stream added, broadcasting: 3\nI0520 01:03:47.524864 3358 log.go:172] (0xc000944bb0) Reply frame received for 3\nI0520 01:03:47.524893 3358 log.go:172] (0xc000944bb0) (0xc000b0a8c0) Create stream\nI0520 01:03:47.524904 3358 log.go:172] (0xc000944bb0) (0xc000b0a8c0) Stream added, broadcasting: 5\nI0520 01:03:47.526349 3358 log.go:172] (0xc000944bb0) Reply frame received for 5\nI0520 01:03:47.579513 3358 log.go:172] (0xc000944bb0) Data frame received for 5\nI0520 01:03:47.579544 3358 log.go:172] (0xc000b0a8c0) (5) Data frame handling\nI0520 01:03:47.579559 3358 log.go:172] (0xc000b0a8c0) 
(5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0520 01:03:47.580087 3358 log.go:172] (0xc000944bb0) Data frame received for 5\nI0520 01:03:47.580116 3358 log.go:172] (0xc000b0a8c0) (5) Data frame handling\nI0520 01:03:47.580134 3358 log.go:172] (0xc000b0a8c0) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0520 01:03:47.580406 3358 log.go:172] (0xc000944bb0) Data frame received for 3\nI0520 01:03:47.580427 3358 log.go:172] (0xc000b0a820) (3) Data frame handling\nI0520 01:03:47.580508 3358 log.go:172] (0xc000944bb0) Data frame received for 5\nI0520 01:03:47.580520 3358 log.go:172] (0xc000b0a8c0) (5) Data frame handling\nI0520 01:03:47.581905 3358 log.go:172] (0xc000944bb0) Data frame received for 1\nI0520 01:03:47.581922 3358 log.go:172] (0xc000698a00) (1) Data frame handling\nI0520 01:03:47.581929 3358 log.go:172] (0xc000698a00) (1) Data frame sent\nI0520 01:03:47.581937 3358 log.go:172] (0xc000944bb0) (0xc000698a00) Stream removed, broadcasting: 1\nI0520 01:03:47.581947 3358 log.go:172] (0xc000944bb0) Go away received\nI0520 01:03:47.582207 3358 log.go:172] (0xc000944bb0) (0xc000698a00) Stream removed, broadcasting: 1\nI0520 01:03:47.582222 3358 log.go:172] (0xc000944bb0) (0xc000b0a820) Stream removed, broadcasting: 3\nI0520 01:03:47.582229 3358 log.go:172] (0xc000944bb0) (0xc000b0a8c0) Stream removed, broadcasting: 5\n" May 20 01:03:47.584: INFO: stdout: "" May 20 01:03:47.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8993 execpod-affinity2mnqc -- /bin/sh -x -c nc -zv -t -w 2 10.103.107.133 80' May 20 01:03:47.803: INFO: stderr: "I0520 01:03:47.728802 3378 log.go:172] (0xc000998420) (0xc0006e4dc0) Create stream\nI0520 01:03:47.728870 3378 log.go:172] (0xc000998420) (0xc0006e4dc0) Stream added, broadcasting: 1\nI0520 01:03:47.733633 3378 log.go:172] (0xc000998420) Reply frame received for 1\nI0520 01:03:47.733669 3378 
log.go:172] (0xc000998420) (0xc0006bfb80) Create stream\nI0520 01:03:47.733689 3378 log.go:172] (0xc000998420) (0xc0006bfb80) Stream added, broadcasting: 3\nI0520 01:03:47.734470 3378 log.go:172] (0xc000998420) Reply frame received for 3\nI0520 01:03:47.734511 3378 log.go:172] (0xc000998420) (0xc00058e140) Create stream\nI0520 01:03:47.734523 3378 log.go:172] (0xc000998420) (0xc00058e140) Stream added, broadcasting: 5\nI0520 01:03:47.735381 3378 log.go:172] (0xc000998420) Reply frame received for 5\nI0520 01:03:47.796834 3378 log.go:172] (0xc000998420) Data frame received for 5\nI0520 01:03:47.796878 3378 log.go:172] (0xc000998420) Data frame received for 3\nI0520 01:03:47.796916 3378 log.go:172] (0xc0006bfb80) (3) Data frame handling\nI0520 01:03:47.796948 3378 log.go:172] (0xc00058e140) (5) Data frame handling\nI0520 01:03:47.796966 3378 log.go:172] (0xc00058e140) (5) Data frame sent\nI0520 01:03:47.796980 3378 log.go:172] (0xc000998420) Data frame received for 5\nI0520 01:03:47.796993 3378 log.go:172] (0xc00058e140) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.107.133 80\nConnection to 10.103.107.133 80 port [tcp/http] succeeded!\nI0520 01:03:47.798758 3378 log.go:172] (0xc000998420) Data frame received for 1\nI0520 01:03:47.798779 3378 log.go:172] (0xc0006e4dc0) (1) Data frame handling\nI0520 01:03:47.798789 3378 log.go:172] (0xc0006e4dc0) (1) Data frame sent\nI0520 01:03:47.798815 3378 log.go:172] (0xc000998420) (0xc0006e4dc0) Stream removed, broadcasting: 1\nI0520 01:03:47.799065 3378 log.go:172] (0xc000998420) Go away received\nI0520 01:03:47.799201 3378 log.go:172] (0xc000998420) (0xc0006e4dc0) Stream removed, broadcasting: 1\nI0520 01:03:47.799225 3378 log.go:172] (0xc000998420) (0xc0006bfb80) Stream removed, broadcasting: 3\nI0520 01:03:47.799239 3378 log.go:172] (0xc000998420) (0xc00058e140) Stream removed, broadcasting: 5\n" May 20 01:03:47.803: INFO: stdout: "" May 20 01:03:47.803: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8993 execpod-affinity2mnqc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.103.107.133:80/ ; done' May 20 01:03:48.112: INFO: stderr: "I0520 01:03:47.938993 3398 log.go:172] (0xc000a011e0) (0xc0006c7040) Create stream\nI0520 01:03:47.939054 3398 log.go:172] (0xc000a011e0) (0xc0006c7040) Stream added, broadcasting: 1\nI0520 01:03:47.941765 3398 log.go:172] (0xc000a011e0) Reply frame received for 1\nI0520 01:03:47.941827 3398 log.go:172] (0xc000a011e0) (0xc000528b40) Create stream\nI0520 01:03:47.941845 3398 log.go:172] (0xc000a011e0) (0xc000528b40) Stream added, broadcasting: 3\nI0520 01:03:47.942789 3398 log.go:172] (0xc000a011e0) Reply frame received for 3\nI0520 01:03:47.942824 3398 log.go:172] (0xc000a011e0) (0xc0003ddae0) Create stream\nI0520 01:03:47.942838 3398 log.go:172] (0xc000a011e0) (0xc0003ddae0) Stream added, broadcasting: 5\nI0520 01:03:47.943678 3398 log.go:172] (0xc000a011e0) Reply frame received for 5\nI0520 01:03:48.014361 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.014395 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.014408 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.014433 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.014444 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.014459 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.107.133:80/\nI0520 01:03:48.023139 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.023166 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.023190 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.023972 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.023993 3398 log.go:172] (0xc0003ddae0) (5) Data 
frame handling\nI0520 01:03:48.024002 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\nI0520 01:03:48.024009 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.024015 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.107.133:80/\nI0520 01:03:48.024030 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\nI0520 01:03:48.024195 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.024220 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.024260 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.030808 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.030838 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.030868 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.031459 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.031470 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.031476 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0520 01:03:48.031538 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.031567 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.031578 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.031591 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.031598 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.031605 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\n http://10.103.107.133:80/\nI0520 01:03:48.038473 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.038486 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.038504 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.039325 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.039339 3398 log.go:172] 
(0xc000528b40) (3) Data frame handling\nI0520 01:03:48.039362 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.039377 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.039383 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.039389 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.107.133:80/\nI0520 01:03:48.050499 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.050529 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.050539 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.050945 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.051008 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.051024 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.051037 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.051043 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.051052 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.107.133:80/\nI0520 01:03:48.054589 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.054601 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.054624 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.054924 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.054936 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.054952 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.054971 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.054995 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.055010 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.107.133:80/\nI0520 01:03:48.058050 3398 
log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.058066 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.058077 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.058448 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.058466 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.058476 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.058488 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.058497 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.107.133:80/\nI0520 01:03:48.058506 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.062661 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.062675 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.062686 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.062940 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.062951 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.062963 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.062976 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.062986 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.062993 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.107.133:80/\nI0520 01:03:48.066308 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.066322 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.066343 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.066682 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.066702 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.066718 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 
01:03:48.066737 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.066745 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.066754 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.107.133:80/\nI0520 01:03:48.070085 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.070106 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.070121 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.070373 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.070387 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.070404 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.070419 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.070436 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.070451 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.107.133:80/\nI0520 01:03:48.074814 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.074834 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.074851 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.075299 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.075328 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.075342 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.075357 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.075365 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.075375 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\nI0520 01:03:48.075385 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.075393 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.103.107.133:80/\nI0520 01:03:48.075457 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\nI0520 01:03:48.080341 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.080375 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.080403 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.080775 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.080791 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.080799 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.080835 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.080866 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.080893 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.107.133:80/\nI0520 01:03:48.085875 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.085888 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.085905 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.086423 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.086455 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.086468 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.086491 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.086524 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.086553 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.107.133:80/\nI0520 01:03:48.090644 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.090661 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.090669 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.090741 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.090751 3398 
log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.090758 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.090771 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.090778 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.090784 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.107.133:80/\nI0520 01:03:48.096668 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.096693 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.096728 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.097453 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.097482 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.097502 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.097538 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.097562 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.107.133:80/\nI0520 01:03:48.097582 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.100993 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.101005 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.101015 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.101492 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.101511 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.101518 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\nI0520 01:03:48.101523 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.101528 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.107.133:80/\nI0520 01:03:48.101540 3398 log.go:172] (0xc0003ddae0) (5) Data frame sent\nI0520 01:03:48.101549 
3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.101554 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.101564 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.104944 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.104953 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.104964 3398 log.go:172] (0xc000528b40) (3) Data frame sent\nI0520 01:03:48.106073 3398 log.go:172] (0xc000a011e0) Data frame received for 5\nI0520 01:03:48.106093 3398 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0520 01:03:48.106107 3398 log.go:172] (0xc000a011e0) Data frame received for 3\nI0520 01:03:48.106122 3398 log.go:172] (0xc000528b40) (3) Data frame handling\nI0520 01:03:48.107753 3398 log.go:172] (0xc000a011e0) Data frame received for 1\nI0520 01:03:48.107776 3398 log.go:172] (0xc0006c7040) (1) Data frame handling\nI0520 01:03:48.107790 3398 log.go:172] (0xc0006c7040) (1) Data frame sent\nI0520 01:03:48.107816 3398 log.go:172] (0xc000a011e0) (0xc0006c7040) Stream removed, broadcasting: 1\nI0520 01:03:48.107846 3398 log.go:172] (0xc000a011e0) Go away received\nI0520 01:03:48.108119 3398 log.go:172] (0xc000a011e0) (0xc0006c7040) Stream removed, broadcasting: 1\nI0520 01:03:48.108134 3398 log.go:172] (0xc000a011e0) (0xc000528b40) Stream removed, broadcasting: 3\nI0520 01:03:48.108141 3398 log.go:172] (0xc000a011e0) (0xc0003ddae0) Stream removed, broadcasting: 5\n" May 20 01:03:48.113: INFO: stdout: "\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d\naffinity-clusterip-5zp2d" May 20 01:03:48.113: INFO: Received response from host: May 
20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Received response from host: affinity-clusterip-5zp2d
May 20 01:03:48.113: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-8993, will wait for the garbage collector to delete the pods
May 20 01:03:48.577: INFO: Deleting ReplicationController affinity-clusterip took: 220.368617ms
May 20 01:03:48.878: INFO: Terminating ReplicationController affinity-clusterip pods took: 300.28006ms
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 01:04:04.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8993" for this suite.
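For reference, a Service of the kind this test exercises can be sketched as follows. This is a hedged reconstruction, not the manifest from the test itself: the selector label and target port are assumptions, since the log above only shows the service name, namespace, and port 80.

```shell
# Hedged sketch (assumed manifest): a ClusterIP Service with client-IP
# session affinity, which is why all 16 curl probes in the log above
# returned the same backend pod (affinity-clusterip-5zp2d).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip
  namespace: services-8993
spec:
  type: ClusterIP
  sessionAffinity: ClientIP   # pin each client IP to a single endpoint
  selector:
    name: affinity-clusterip  # assumed label on the RC's pods
  ports:
  - port: 80
    targetPort: 9376          # assumed backend container port
EOF
```

With `sessionAffinity: ClientIP`, kube-proxy routes repeated connections from one source IP to the same endpoint, which is exactly the invariant the 16-request loop verifies.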
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:28.799 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":262,"skipped":4394,"failed":0}
S
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 01:04:04.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 20 01:04:05.009: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cc84787-adb4-42cf-a252-f8716f3305d0" in namespace "downward-api-2978" to be "Succeeded or Failed"
May 20 01:04:05.021: INFO: Pod "downwardapi-volume-7cc84787-adb4-42cf-a252-f8716f3305d0": Phase="Pending", Reason="", readiness=false.
Elapsed: 12.01832ms
May 20 01:04:07.025: INFO: Pod "downwardapi-volume-7cc84787-adb4-42cf-a252-f8716f3305d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015878104s
May 20 01:04:09.028: INFO: Pod "downwardapi-volume-7cc84787-adb4-42cf-a252-f8716f3305d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018810908s
STEP: Saw pod success
May 20 01:04:09.028: INFO: Pod "downwardapi-volume-7cc84787-adb4-42cf-a252-f8716f3305d0" satisfied condition "Succeeded or Failed"
May 20 01:04:09.030: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7cc84787-adb4-42cf-a252-f8716f3305d0 container client-container:
STEP: delete the pod
May 20 01:04:09.059: INFO: Waiting for pod downwardapi-volume-7cc84787-adb4-42cf-a252-f8716f3305d0 to disappear
May 20 01:04:09.069: INFO: Pod downwardapi-volume-7cc84787-adb4-42cf-a252-f8716f3305d0 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 01:04:09.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2978" for this suite.
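The pod manifest under test is not shown in the log. A minimal sketch of the same mechanism, with an assumed image and illustrative names, would be:

```shell
# Hedged sketch: expose a container's own cpu request as a file via a
# downwardAPI volume, the feature the downward-api test above verifies.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36               # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                 # report the request in millicores
EOF
```

The pod prints its own cpu request and exits, so it reaches phase Succeeded, matching the "Succeeded or Failed" condition the test waits on.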
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":263,"skipped":4395,"failed":0}
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 01:04:09.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-6b928f20-2417-4d33-b939-34080ac6fbca
STEP: Creating a pod to test consume configMaps
May 20 01:04:09.173: INFO: Waiting up to 5m0s for pod "pod-configmaps-e138190d-5737-4ba5-b3dd-121e18906e70" in namespace "configmap-9953" to be "Succeeded or Failed"
May 20 01:04:09.397: INFO: Pod "pod-configmaps-e138190d-5737-4ba5-b3dd-121e18906e70": Phase="Pending", Reason="", readiness=false. Elapsed: 224.927939ms
May 20 01:04:11.401: INFO: Pod "pod-configmaps-e138190d-5737-4ba5-b3dd-121e18906e70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228460631s
May 20 01:04:13.405: INFO: Pod "pod-configmaps-e138190d-5737-4ba5-b3dd-121e18906e70": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.232869659s
STEP: Saw pod success
May 20 01:04:13.405: INFO: Pod "pod-configmaps-e138190d-5737-4ba5-b3dd-121e18906e70" satisfied condition "Succeeded or Failed"
May 20 01:04:13.408: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-e138190d-5737-4ba5-b3dd-121e18906e70 container configmap-volume-test:
STEP: delete the pod
May 20 01:04:13.445: INFO: Waiting for pod pod-configmaps-e138190d-5737-4ba5-b3dd-121e18906e70 to disappear
May 20 01:04:13.458: INFO: Pod pod-configmaps-e138190d-5737-4ba5-b3dd-121e18906e70 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 01:04:13.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9953" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":264,"skipped":4396,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 01:04:13.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 20 01:04:13.560: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially,
daemon pods should not be running on any nodes. May 20 01:04:13.565: INFO: Number of nodes with available pods: 0 May 20 01:04:13.565: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 20 01:04:13.635: INFO: Number of nodes with available pods: 0 May 20 01:04:13.635: INFO: Node latest-worker is running more than one daemon pod May 20 01:04:14.640: INFO: Number of nodes with available pods: 0 May 20 01:04:14.640: INFO: Node latest-worker is running more than one daemon pod May 20 01:04:15.640: INFO: Number of nodes with available pods: 0 May 20 01:04:15.640: INFO: Node latest-worker is running more than one daemon pod May 20 01:04:16.640: INFO: Number of nodes with available pods: 0 May 20 01:04:16.640: INFO: Node latest-worker is running more than one daemon pod May 20 01:04:17.640: INFO: Number of nodes with available pods: 1 May 20 01:04:17.640: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 20 01:04:17.671: INFO: Number of nodes with available pods: 1 May 20 01:04:17.671: INFO: Number of running nodes: 0, number of available pods: 1 May 20 01:04:18.675: INFO: Number of nodes with available pods: 0 May 20 01:04:18.676: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 20 01:04:18.694: INFO: Number of nodes with available pods: 0 May 20 01:04:18.694: INFO: Node latest-worker is running more than one daemon pod May 20 01:04:19.699: INFO: Number of nodes with available pods: 0 May 20 01:04:19.699: INFO: Node latest-worker is running more than one daemon pod May 20 01:04:20.705: INFO: Number of nodes with available pods: 0 May 20 01:04:20.705: INFO: Node latest-worker is running more than one daemon pod May 20 01:04:21.698: INFO: Number of nodes with available pods: 0 May 20 
01:04:21.698: INFO: Node latest-worker is running more than one daemon pod May 20 01:04:22.699: INFO: Number of nodes with available pods: 0 May 20 01:04:22.699: INFO: Node latest-worker is running more than one daemon pod May 20 01:04:23.699: INFO: Number of nodes with available pods: 0 May 20 01:04:23.699: INFO: Node latest-worker is running more than one daemon pod May 20 01:04:24.698: INFO: Number of nodes with available pods: 0 May 20 01:04:24.698: INFO: Node latest-worker is running more than one daemon pod May 20 01:04:25.699: INFO: Number of nodes with available pods: 0 May 20 01:04:25.699: INFO: Node latest-worker is running more than one daemon pod May 20 01:04:26.698: INFO: Number of nodes with available pods: 0 May 20 01:04:26.698: INFO: Node latest-worker is running more than one daemon pod May 20 01:04:27.698: INFO: Number of nodes with available pods: 0 May 20 01:04:27.698: INFO: Node latest-worker is running more than one daemon pod May 20 01:04:28.698: INFO: Number of nodes with available pods: 1 May 20 01:04:28.698: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5852, will wait for the garbage collector to delete the pods May 20 01:04:28.794: INFO: Deleting DaemonSet.extensions daemon-set took: 37.963906ms May 20 01:04:28.895: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.195495ms May 20 01:04:32.998: INFO: Number of nodes with available pods: 0 May 20 01:04:32.998: INFO: Number of running nodes: 0, number of available pods: 0 May 20 01:04:33.001: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5852/daemonsets","resourceVersion":"6102572"},"items":null} May 20 01:04:33.003: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5852/pods","resourceVersion":"6102572"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:04:33.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5852" for this suite. • [SLOW TEST:19.588 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":265,"skipped":4408,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:04:33.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 20 01:04:41.221: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 20 01:04:41.244: INFO: Pod pod-with-prestop-http-hook still exists
May 20 01:04:43.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 20 01:04:43.248: INFO: Pod pod-with-prestop-http-hook still exists
May 20 01:04:45.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 20 01:04:45.249: INFO: Pod pod-with-prestop-http-hook still exists
May 20 01:04:47.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 20 01:04:47.250: INFO: Pod pod-with-prestop-http-hook still exists
May 20 01:04:49.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 20 01:04:49.260: INFO: Pod pod-with-prestop-http-hook still exists
May 20 01:04:51.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 20 01:04:51.248: INFO: Pod pod-with-prestop-http-hook still exists
May 20 01:04:53.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 20 01:04:53.248: INFO: Pod pod-with-prestop-http-hook still exists
May 20 01:04:55.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 20 01:04:55.249: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 01:04:55.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-471" for this suite.
• [SLOW TEST:22.207 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":266,"skipped":4424,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 01:04:55.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 01:04:55.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9015" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":267,"skipped":4427,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 01:04:55.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-4378
STEP: creating service affinity-nodeport-transition in namespace services-4378
STEP: creating replication controller affinity-nodeport-transition in namespace services-4378
I0520 01:04:55.527476 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-4378, replica count: 3 I0520 01:04:58.578213 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 01:05:01.578532 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 01:05:01.589: INFO: Creating new exec pod May 20 01:05:06.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4378 execpod-affinity6nc2v -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 20 01:05:06.838: INFO: stderr: "I0520 01:05:06.744271 3418 log.go:172] (0xc000419b80) (0xc0005550e0) Create stream\nI0520 01:05:06.744331 3418 log.go:172] (0xc000419b80) (0xc0005550e0) Stream added, broadcasting: 1\nI0520 01:05:06.746893 3418 log.go:172] (0xc000419b80) Reply frame received for 1\nI0520 01:05:06.746966 3418 log.go:172] (0xc000419b80) (0xc0004f8c80) Create stream\nI0520 01:05:06.746990 3418 log.go:172] (0xc000419b80) (0xc0004f8c80) Stream added, broadcasting: 3\nI0520 01:05:06.748003 3418 log.go:172] (0xc000419b80) Reply frame received for 3\nI0520 01:05:06.748062 3418 log.go:172] (0xc000419b80) (0xc00033adc0) Create stream\nI0520 01:05:06.748096 3418 log.go:172] (0xc000419b80) (0xc00033adc0) Stream added, broadcasting: 5\nI0520 01:05:06.749464 3418 log.go:172] (0xc000419b80) Reply frame received for 5\nI0520 01:05:06.831550 3418 log.go:172] (0xc000419b80) Data frame received for 5\nI0520 01:05:06.831587 3418 log.go:172] (0xc00033adc0) (5) Data frame handling\nI0520 01:05:06.831613 3418 log.go:172] (0xc00033adc0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port 
[tcp/http] succeeded!\nI0520 01:05:06.831796 3418 log.go:172] (0xc000419b80) Data frame received for 5\nI0520 01:05:06.831824 3418 log.go:172] (0xc00033adc0) (5) Data frame handling\nI0520 01:05:06.831963 3418 log.go:172] (0xc000419b80) Data frame received for 3\nI0520 01:05:06.831993 3418 log.go:172] (0xc0004f8c80) (3) Data frame handling\nI0520 01:05:06.834089 3418 log.go:172] (0xc000419b80) Data frame received for 1\nI0520 01:05:06.834110 3418 log.go:172] (0xc0005550e0) (1) Data frame handling\nI0520 01:05:06.834133 3418 log.go:172] (0xc0005550e0) (1) Data frame sent\nI0520 01:05:06.834146 3418 log.go:172] (0xc000419b80) (0xc0005550e0) Stream removed, broadcasting: 1\nI0520 01:05:06.834267 3418 log.go:172] (0xc000419b80) Go away received\nI0520 01:05:06.834466 3418 log.go:172] (0xc000419b80) (0xc0005550e0) Stream removed, broadcasting: 1\nI0520 01:05:06.834482 3418 log.go:172] (0xc000419b80) (0xc0004f8c80) Stream removed, broadcasting: 3\nI0520 01:05:06.834490 3418 log.go:172] (0xc000419b80) (0xc00033adc0) Stream removed, broadcasting: 5\n" May 20 01:05:06.838: INFO: stdout: "" May 20 01:05:06.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4378 execpod-affinity6nc2v -- /bin/sh -x -c nc -zv -t -w 2 10.103.226.115 80' May 20 01:05:07.059: INFO: stderr: "I0520 01:05:06.973042 3440 log.go:172] (0xc00003a420) (0xc000307400) Create stream\nI0520 01:05:06.973097 3440 log.go:172] (0xc00003a420) (0xc000307400) Stream added, broadcasting: 1\nI0520 01:05:06.975521 3440 log.go:172] (0xc00003a420) Reply frame received for 1\nI0520 01:05:06.975552 3440 log.go:172] (0xc00003a420) (0xc000307f40) Create stream\nI0520 01:05:06.975564 3440 log.go:172] (0xc00003a420) (0xc000307f40) Stream added, broadcasting: 3\nI0520 01:05:06.976365 3440 log.go:172] (0xc00003a420) Reply frame received for 3\nI0520 01:05:06.976397 3440 log.go:172] (0xc00003a420) (0xc0001375e0) Create stream\nI0520 
01:05:06.976409 3440 log.go:172] (0xc00003a420) (0xc0001375e0) Stream added, broadcasting: 5\nI0520 01:05:06.977612 3440 log.go:172] (0xc00003a420) Reply frame received for 5\nI0520 01:05:07.050989 3440 log.go:172] (0xc00003a420) Data frame received for 5\nI0520 01:05:07.051023 3440 log.go:172] (0xc0001375e0) (5) Data frame handling\nI0520 01:05:07.051042 3440 log.go:172] (0xc0001375e0) (5) Data frame sent\nI0520 01:05:07.051054 3440 log.go:172] (0xc00003a420) Data frame received for 5\nI0520 01:05:07.051063 3440 log.go:172] (0xc0001375e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.226.115 80\nConnection to 10.103.226.115 80 port [tcp/http] succeeded!\nI0520 01:05:07.051096 3440 log.go:172] (0xc00003a420) Data frame received for 3\nI0520 01:05:07.051106 3440 log.go:172] (0xc000307f40) (3) Data frame handling\nI0520 01:05:07.052210 3440 log.go:172] (0xc00003a420) Data frame received for 1\nI0520 01:05:07.052242 3440 log.go:172] (0xc000307400) (1) Data frame handling\nI0520 01:05:07.052273 3440 log.go:172] (0xc000307400) (1) Data frame sent\nI0520 01:05:07.052288 3440 log.go:172] (0xc00003a420) (0xc000307400) Stream removed, broadcasting: 1\nI0520 01:05:07.052312 3440 log.go:172] (0xc00003a420) Go away received\nI0520 01:05:07.052837 3440 log.go:172] (0xc00003a420) (0xc000307400) Stream removed, broadcasting: 1\nI0520 01:05:07.052860 3440 log.go:172] (0xc00003a420) (0xc000307f40) Stream removed, broadcasting: 3\nI0520 01:05:07.052873 3440 log.go:172] (0xc00003a420) (0xc0001375e0) Stream removed, broadcasting: 5\n" May 20 01:05:07.059: INFO: stdout: "" May 20 01:05:07.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4378 execpod-affinity6nc2v -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30476' May 20 01:05:07.268: INFO: stderr: "I0520 01:05:07.202927 3460 log.go:172] (0xc0004de000) (0xc00024f180) Create stream\nI0520 01:05:07.203000 3460 log.go:172] (0xc0004de000) 
(0xc00024f180) Stream added, broadcasting: 1\nI0520 01:05:07.204944 3460 log.go:172] (0xc0004de000) Reply frame received for 1\nI0520 01:05:07.205019 3460 log.go:172] (0xc0004de000) (0xc000b00000) Create stream\nI0520 01:05:07.205048 3460 log.go:172] (0xc0004de000) (0xc000b00000) Stream added, broadcasting: 3\nI0520 01:05:07.206361 3460 log.go:172] (0xc0004de000) Reply frame received for 3\nI0520 01:05:07.206406 3460 log.go:172] (0xc0004de000) (0xc000b000a0) Create stream\nI0520 01:05:07.206426 3460 log.go:172] (0xc0004de000) (0xc000b000a0) Stream added, broadcasting: 5\nI0520 01:05:07.207470 3460 log.go:172] (0xc0004de000) Reply frame received for 5\nI0520 01:05:07.260440 3460 log.go:172] (0xc0004de000) Data frame received for 5\nI0520 01:05:07.260479 3460 log.go:172] (0xc000b000a0) (5) Data frame handling\nI0520 01:05:07.260503 3460 log.go:172] (0xc000b000a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 30476\nConnection to 172.17.0.13 30476 port [tcp/30476] succeeded!\nI0520 01:05:07.260568 3460 log.go:172] (0xc0004de000) Data frame received for 3\nI0520 01:05:07.260589 3460 log.go:172] (0xc000b00000) (3) Data frame handling\nI0520 01:05:07.261334 3460 log.go:172] (0xc0004de000) Data frame received for 5\nI0520 01:05:07.261357 3460 log.go:172] (0xc000b000a0) (5) Data frame handling\nI0520 01:05:07.262861 3460 log.go:172] (0xc0004de000) Data frame received for 1\nI0520 01:05:07.262886 3460 log.go:172] (0xc00024f180) (1) Data frame handling\nI0520 01:05:07.262908 3460 log.go:172] (0xc00024f180) (1) Data frame sent\nI0520 01:05:07.262920 3460 log.go:172] (0xc0004de000) (0xc00024f180) Stream removed, broadcasting: 1\nI0520 01:05:07.262934 3460 log.go:172] (0xc0004de000) Go away received\nI0520 01:05:07.263272 3460 log.go:172] (0xc0004de000) (0xc00024f180) Stream removed, broadcasting: 1\nI0520 01:05:07.263289 3460 log.go:172] (0xc0004de000) (0xc000b00000) Stream removed, broadcasting: 3\nI0520 01:05:07.263296 3460 log.go:172] (0xc0004de000) (0xc000b000a0) 
Stream removed, broadcasting: 5\n" May 20 01:05:07.268: INFO: stdout: "" May 20 01:05:07.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4378 execpod-affinity6nc2v -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30476' May 20 01:05:07.546: INFO: stderr: "I0520 01:05:07.414410 3480 log.go:172] (0xc000a0f080) (0xc0006ecf00) Create stream\nI0520 01:05:07.414468 3480 log.go:172] (0xc000a0f080) (0xc0006ecf00) Stream added, broadcasting: 1\nI0520 01:05:07.417713 3480 log.go:172] (0xc000a0f080) Reply frame received for 1\nI0520 01:05:07.417761 3480 log.go:172] (0xc000a0f080) (0xc0005405a0) Create stream\nI0520 01:05:07.417776 3480 log.go:172] (0xc000a0f080) (0xc0005405a0) Stream added, broadcasting: 3\nI0520 01:05:07.418579 3480 log.go:172] (0xc000a0f080) Reply frame received for 3\nI0520 01:05:07.418607 3480 log.go:172] (0xc000a0f080) (0xc0006dd540) Create stream\nI0520 01:05:07.418634 3480 log.go:172] (0xc000a0f080) (0xc0006dd540) Stream added, broadcasting: 5\nI0520 01:05:07.419676 3480 log.go:172] (0xc000a0f080) Reply frame received for 5\nI0520 01:05:07.539556 3480 log.go:172] (0xc000a0f080) Data frame received for 5\nI0520 01:05:07.539592 3480 log.go:172] (0xc0006dd540) (5) Data frame handling\nI0520 01:05:07.539606 3480 log.go:172] (0xc0006dd540) (5) Data frame sent\nI0520 01:05:07.539616 3480 log.go:172] (0xc000a0f080) Data frame received for 5\nI0520 01:05:07.539626 3480 log.go:172] (0xc0006dd540) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30476\nConnection to 172.17.0.12 30476 port [tcp/30476] succeeded!\nI0520 01:05:07.539651 3480 log.go:172] (0xc000a0f080) Data frame received for 3\nI0520 01:05:07.539661 3480 log.go:172] (0xc0005405a0) (3) Data frame handling\nI0520 01:05:07.540786 3480 log.go:172] (0xc000a0f080) Data frame received for 1\nI0520 01:05:07.540823 3480 log.go:172] (0xc0006ecf00) (1) Data frame handling\nI0520 01:05:07.540856 3480 log.go:172] 
(0xc0006ecf00) (1) Data frame sent\nI0520 01:05:07.540879 3480 log.go:172] (0xc000a0f080) (0xc0006ecf00) Stream removed, broadcasting: 1\nI0520 01:05:07.540930 3480 log.go:172] (0xc000a0f080) Go away received\nI0520 01:05:07.541747 3480 log.go:172] (0xc000a0f080) (0xc0006ecf00) Stream removed, broadcasting: 1\nI0520 01:05:07.541781 3480 log.go:172] (0xc000a0f080) (0xc0005405a0) Stream removed, broadcasting: 3\nI0520 01:05:07.541824 3480 log.go:172] (0xc000a0f080) (0xc0006dd540) Stream removed, broadcasting: 5\n" May 20 01:05:07.546: INFO: stdout: "" May 20 01:05:07.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4378 execpod-affinity6nc2v -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30476/ ; done' May 20 01:05:07.849: INFO: stderr: "I0520 01:05:07.689862 3498 log.go:172] (0xc000540fd0) (0xc000a7c500) Create stream\nI0520 01:05:07.689903 3498 log.go:172] (0xc000540fd0) (0xc000a7c500) Stream added, broadcasting: 1\nI0520 01:05:07.698187 3498 log.go:172] (0xc000540fd0) Reply frame received for 1\nI0520 01:05:07.698225 3498 log.go:172] (0xc000540fd0) (0xc0006d8640) Create stream\nI0520 01:05:07.698234 3498 log.go:172] (0xc000540fd0) (0xc0006d8640) Stream added, broadcasting: 3\nI0520 01:05:07.703425 3498 log.go:172] (0xc000540fd0) Reply frame received for 3\nI0520 01:05:07.703463 3498 log.go:172] (0xc000540fd0) (0xc00047cdc0) Create stream\nI0520 01:05:07.703471 3498 log.go:172] (0xc000540fd0) (0xc00047cdc0) Stream added, broadcasting: 5\nI0520 01:05:07.707967 3498 log.go:172] (0xc000540fd0) Reply frame received for 5\nI0520 01:05:07.762935 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.762970 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.762986 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.763008 3498 log.go:172] (0xc000540fd0) Data frame received for 
5\nI0520 01:05:07.763018 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.763028 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.766426 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.766446 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.766462 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.766728 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.766746 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.766753 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.766765 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.766770 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.766775 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.770234 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.770249 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.770260 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.770553 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.770574 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.770582 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.770593 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.770608 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.770626 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.774850 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.774868 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.774884 3498 log.go:172] (0xc0006d8640) 
(3) Data frame sent\nI0520 01:05:07.775194 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.775214 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.775231 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.775251 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.775258 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.775266 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.779054 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.779063 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.779068 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.779731 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.779776 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.779812 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.779827 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.779838 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.779854 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.783288 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.783298 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.783305 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.783645 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.783653 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.783658 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.783663 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.783667 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.783675 3498 
log.go:172] (0xc00047cdc0) (5) Data frame sent\nI0520 01:05:07.783679 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.783683 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.783696 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\nI0520 01:05:07.791324 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.791344 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.791357 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.792063 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.792078 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.792086 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.792223 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.792241 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.792252 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.796825 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.796843 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.796863 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.797421 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.797439 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.797446 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.797455 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.797461 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.797466 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.802219 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.802231 
3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.802236 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.802766 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.802787 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.802796 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.802806 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.802813 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.802819 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.806666 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.806685 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.806699 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.807050 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.807084 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.807102 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\nI0520 01:05:07.807113 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.807122 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.807139 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\nI0520 01:05:07.807149 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.807165 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.807190 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.811434 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.811452 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.811477 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.811779 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 
01:05:07.811795 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.811805 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\nI0520 01:05:07.811812 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.811817 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.811830 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\nI0520 01:05:07.811905 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.811925 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.811940 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.815015 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.815028 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.815035 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.815303 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.815319 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.815338 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.815356 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.815367 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.815384 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.819517 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.819533 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.819560 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.820061 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.820079 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\n+ echo\nI0520 01:05:07.820094 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.820120 3498 log.go:172] (0xc0006d8640) 
(3) Data frame handling\nI0520 01:05:07.820134 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.820153 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\nI0520 01:05:07.820174 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.820187 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.820205 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.827522 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.827545 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.827561 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.828243 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.828263 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.828277 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.828303 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.828354 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.828399 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.833098 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.833248 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.833261 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.833695 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.833715 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.833729 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.833749 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.833772 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.833793 3498 log.go:172] (0xc0006d8640) (3) 
Data frame sent\nI0520 01:05:07.838817 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.838839 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.838859 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.839381 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.839395 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.839414 3498 log.go:172] (0xc00047cdc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:07.839478 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.839543 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.839564 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.843300 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.843318 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.843335 3498 log.go:172] (0xc0006d8640) (3) Data frame sent\nI0520 01:05:07.843711 3498 log.go:172] (0xc000540fd0) Data frame received for 3\nI0520 01:05:07.843724 3498 log.go:172] (0xc0006d8640) (3) Data frame handling\nI0520 01:05:07.843793 3498 log.go:172] (0xc000540fd0) Data frame received for 5\nI0520 01:05:07.843805 3498 log.go:172] (0xc00047cdc0) (5) Data frame handling\nI0520 01:05:07.845657 3498 log.go:172] (0xc000540fd0) Data frame received for 1\nI0520 01:05:07.845672 3498 log.go:172] (0xc000a7c500) (1) Data frame handling\nI0520 01:05:07.845680 3498 log.go:172] (0xc000a7c500) (1) Data frame sent\nI0520 01:05:07.845690 3498 log.go:172] (0xc000540fd0) (0xc000a7c500) Stream removed, broadcasting: 1\nI0520 01:05:07.845708 3498 log.go:172] (0xc000540fd0) Go away received\nI0520 01:05:07.845973 3498 log.go:172] (0xc000540fd0) (0xc000a7c500) Stream removed, broadcasting: 1\nI0520 01:05:07.845988 3498 log.go:172] (0xc000540fd0) (0xc0006d8640) Stream removed, broadcasting: 3\nI0520 01:05:07.845996 3498 log.go:172] 
(0xc000540fd0) (0xc00047cdc0) Stream removed, broadcasting: 5\n" May 20 01:05:07.850: INFO: stdout: "\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-tblfp\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-tblfp\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-46cx6\naffinity-nodeport-transition-46cx6\naffinity-nodeport-transition-tblfp\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-46cx6\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-46cx6\naffinity-nodeport-transition-splp8" May 20 01:05:07.850: INFO: Received response from host: May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-tblfp May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-tblfp May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-46cx6 May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-46cx6 May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-tblfp May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-46cx6 May 20 01:05:07.850: INFO: Received response from host: 
affinity-nodeport-transition-splp8 May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-46cx6 May 20 01:05:07.850: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:07.858: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4378 execpod-affinity6nc2v -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30476/ ; done' May 20 01:05:08.254: INFO: stderr: "I0520 01:05:08.062837 3518 log.go:172] (0xc000a080b0) (0xc0008dc640) Create stream\nI0520 01:05:08.062929 3518 log.go:172] (0xc000a080b0) (0xc0008dc640) Stream added, broadcasting: 1\nI0520 01:05:08.064731 3518 log.go:172] (0xc000a080b0) Reply frame received for 1\nI0520 01:05:08.064784 3518 log.go:172] (0xc000a080b0) (0xc0008d43c0) Create stream\nI0520 01:05:08.064797 3518 log.go:172] (0xc000a080b0) (0xc0008d43c0) Stream added, broadcasting: 3\nI0520 01:05:08.065945 3518 log.go:172] (0xc000a080b0) Reply frame received for 3\nI0520 01:05:08.066013 3518 log.go:172] (0xc000a080b0) (0xc0008dcbe0) Create stream\nI0520 01:05:08.066041 3518 log.go:172] (0xc000a080b0) (0xc0008dcbe0) Stream added, broadcasting: 5\nI0520 01:05:08.067238 3518 log.go:172] (0xc000a080b0) Reply frame received for 5\nI0520 01:05:08.162018 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.162083 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.162107 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.162187 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.162245 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.162273 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.166390 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.166425 
3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.166438 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.166889 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.166910 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.166922 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ echo\nI0520 01:05:08.167299 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.167316 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.167327 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.167672 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.167692 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.167704 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.174568 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.174594 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.174625 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.175075 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.175088 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.175100 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\nI0520 01:05:08.175105 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.175111 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.175176 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\nI0520 01:05:08.175246 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.175257 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.175263 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.180348 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 
01:05:08.180372 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.180390 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.180677 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.180698 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.180716 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ echo\nI0520 01:05:08.180733 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.180757 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.180787 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.180801 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.180810 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.180816 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.185365 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.185433 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.185460 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.185857 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.185873 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.185879 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.185887 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.185892 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.185896 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.189311 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.189330 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.189350 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.189997 3518 log.go:172] (0xc000a080b0) Data frame 
received for 5\nI0520 01:05:08.190018 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.190026 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.190039 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.190044 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.190050 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.193816 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.193830 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.193854 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.194220 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.194237 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.194251 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ I0520 01:05:08.194423 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.194439 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.194450 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\necho\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.194549 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.194563 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.194572 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.197922 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.197936 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.197949 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.198293 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.198320 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.198344 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.198413 3518 log.go:172] 
(0xc0008dcbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.198437 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.198463 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.203332 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.203347 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.203360 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.204015 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.204039 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.204050 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.204063 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.204070 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.204079 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.207741 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.207765 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.207780 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.208119 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.208145 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.208173 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.208212 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.208234 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.208249 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.212000 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.212014 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.212020 3518 
log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.212427 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.212439 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.212446 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.212566 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.212582 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.212591 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.218242 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.218259 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.218272 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.219130 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.219160 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.219176 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.219374 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.219389 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.219402 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.222305 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.222321 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.222334 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.222611 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.222628 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.222648 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.222665 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.222687 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 
01:05:08.222701 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.228306 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.228331 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.228376 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.228971 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.228997 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.229007 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.229463 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.229485 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.229505 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.233027 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.233063 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.233092 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.233626 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.233644 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.233657 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.233668 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.233686 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.233699 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.237973 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.237990 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.238004 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.238828 3518 log.go:172] (0xc000a080b0) Data frame received for 
3\nI0520 01:05:08.238867 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.238884 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.238912 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.238926 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.238950 3518 log.go:172] (0xc0008dcbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30476/\nI0520 01:05:08.244560 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.244591 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.244616 3518 log.go:172] (0xc0008d43c0) (3) Data frame sent\nI0520 01:05:08.245889 3518 log.go:172] (0xc000a080b0) Data frame received for 3\nI0520 01:05:08.245932 3518 log.go:172] (0xc0008d43c0) (3) Data frame handling\nI0520 01:05:08.245965 3518 log.go:172] (0xc000a080b0) Data frame received for 5\nI0520 01:05:08.245985 3518 log.go:172] (0xc0008dcbe0) (5) Data frame handling\nI0520 01:05:08.248070 3518 log.go:172] (0xc000a080b0) Data frame received for 1\nI0520 01:05:08.248111 3518 log.go:172] (0xc0008dc640) (1) Data frame handling\nI0520 01:05:08.248138 3518 log.go:172] (0xc0008dc640) (1) Data frame sent\nI0520 01:05:08.248196 3518 log.go:172] (0xc000a080b0) (0xc0008dc640) Stream removed, broadcasting: 1\nI0520 01:05:08.248224 3518 log.go:172] (0xc000a080b0) Go away received\nI0520 01:05:08.248430 3518 log.go:172] (0xc000a080b0) (0xc0008dc640) Stream removed, broadcasting: 1\nI0520 01:05:08.248450 3518 log.go:172] (0xc000a080b0) (0xc0008d43c0) Stream removed, broadcasting: 3\nI0520 01:05:08.248456 3518 log.go:172] (0xc000a080b0) (0xc0008dcbe0) Stream removed, broadcasting: 5\n" May 20 01:05:08.254: INFO: stdout: 
"\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8\naffinity-nodeport-transition-splp8" May 20 01:05:08.254: INFO: Received response from host: May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 
May 20 01:05:08.254: INFO: Received response from host: affinity-nodeport-transition-splp8 May 20 01:05:08.254: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-4378, will wait for the garbage collector to delete the pods May 20 01:05:08.326: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.340612ms May 20 01:05:08.727: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.265426ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:05:25.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4378" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:29.801 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":268,"skipped":4460,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:05:25.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:05:41.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4205" for this suite. • [SLOW TEST:16.103 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":269,"skipped":4503,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:05:41.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:05:41.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9062" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":270,"skipped":4510,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:05:41.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 20 01:05:41.416: INFO: Waiting up to 5m0s for pod "pod-6ca9a6be-92c9-4be2-8ffc-7145c9653bb3" in namespace "emptydir-9771" to be "Succeeded or Failed" May 20 01:05:41.428: INFO: Pod "pod-6ca9a6be-92c9-4be2-8ffc-7145c9653bb3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.891499ms May 20 01:05:43.432: INFO: Pod "pod-6ca9a6be-92c9-4be2-8ffc-7145c9653bb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016005233s May 20 01:05:45.437: INFO: Pod "pod-6ca9a6be-92c9-4be2-8ffc-7145c9653bb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020754136s STEP: Saw pod success May 20 01:05:45.437: INFO: Pod "pod-6ca9a6be-92c9-4be2-8ffc-7145c9653bb3" satisfied condition "Succeeded or Failed" May 20 01:05:45.440: INFO: Trying to get logs from node latest-worker pod pod-6ca9a6be-92c9-4be2-8ffc-7145c9653bb3 container test-container: STEP: delete the pod May 20 01:05:45.534: INFO: Waiting for pod pod-6ca9a6be-92c9-4be2-8ffc-7145c9653bb3 to disappear May 20 01:05:45.552: INFO: Pod pod-6ca9a6be-92c9-4be2-8ffc-7145c9653bb3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:05:45.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9771" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":271,"skipped":4522,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:05:45.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 01:05:45.668: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 20 01:05:48.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-11 create -f -' May 20 01:05:51.889: INFO: stderr: "" May 20 01:05:51.889: INFO: stdout: "e2e-test-crd-publish-openapi-6554-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 20 01:05:51.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-11 delete e2e-test-crd-publish-openapi-6554-crds test-cr' May 20 01:05:52.018: INFO: stderr: "" May 20 01:05:52.018: INFO: stdout: "e2e-test-crd-publish-openapi-6554-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 20 
01:05:52.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-11 apply -f -' May 20 01:05:52.313: INFO: stderr: "" May 20 01:05:52.313: INFO: stdout: "e2e-test-crd-publish-openapi-6554-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 20 01:05:52.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-11 delete e2e-test-crd-publish-openapi-6554-crds test-cr' May 20 01:05:52.486: INFO: stderr: "" May 20 01:05:52.486: INFO: stdout: "e2e-test-crd-publish-openapi-6554-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 20 01:05:52.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6554-crds' May 20 01:05:52.800: INFO: stderr: "" May 20 01:05:52.800: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6554-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:05:55.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-11" for this suite. 
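The CRD exercised above publishes no schema detail beyond the root; preserving unknown fields at the schema root is expressed with `x-kubernetes-preserve-unknown-fields`. A minimal sketch of such a CRD — group and names are illustrative, since the test generates randomized ones like `e2e-test-crd-publish-openapi-6554-crd`:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com            # illustrative; the e2e test randomizes this
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Widget
    plural: widgets
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # accept arbitrary fields at the root
```

With this in place, `kubectl create`/`apply` accept custom resources carrying any unknown properties, and `kubectl explain` returns only KIND and VERSION with an empty DESCRIPTION — exactly what the stdout captured above shows.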
• [SLOW TEST:10.180 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":272,"skipped":4523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:05:55.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:05:59.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7320" for 
this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":273,"skipped":4550,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:05:59.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:06:16.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-342" for this suite. • [SLOW TEST:16.220 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":288,"completed":274,"skipped":4555,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:06:16.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 20 01:06:20.725: INFO: Successfully updated pod "pod-update-activedeadlineseconds-4ebbca5a-9f96-4a48-a216-c6ddff7f92fc" May 20 01:06:20.725: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-4ebbca5a-9f96-4a48-a216-c6ddff7f92fc" in namespace "pods-8961" to be "terminated due to deadline exceeded" May 20 01:06:20.732: INFO: Pod "pod-update-activedeadlineseconds-4ebbca5a-9f96-4a48-a216-c6ddff7f92fc": Phase="Running", Reason="", readiness=true. Elapsed: 6.815461ms May 20 01:06:22.736: INFO: Pod "pod-update-activedeadlineseconds-4ebbca5a-9f96-4a48-a216-c6ddff7f92fc": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.010512941s May 20 01:06:22.736: INFO: Pod "pod-update-activedeadlineseconds-4ebbca5a-9f96-4a48-a216-c6ddff7f92fc" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:06:22.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8961" for this suite. • [SLOW TEST:6.645 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":275,"skipped":4560,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:06:22.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 01:08:22.817: INFO: Deleting pod "var-expansion-19bc123c-ffa2-4001-9999-3fca49e559b1" in namespace "var-expansion-7898" May 20 01:08:22.823: INFO: Wait up to 5m0s for pod 
"var-expansion-19bc123c-ffa2-4001-9999-3fca49e559b1" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:08:32.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7898" for this suite. • [SLOW TEST:130.106 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":276,"skipped":4562,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:08:32.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 01:08:32.947: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes 
[AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:08:37.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2157" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":277,"skipped":4577,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:08:37.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 20 01:08:41.698: INFO: Successfully updated pod "labelsupdatefac49927-6045-4353-b799-3d4a55e8ef02" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:08:45.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6808" for this suite. 
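The projected downwardAPI test above creates a pod whose labels are exposed through a volume, then updates the labels and waits for the kubelet to refresh the file. A sketch of the relevant volume wiring — pod name, labels, and image are illustrative assumptions; the `projected`/`downwardAPI` field layout is the standard Kubernetes API shape:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo              # illustrative; the test generates a random name
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox                     # illustrative image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```

Updating the pod's labels (e.g. `kubectl label pod labelsupdate-demo key1=value2 --overwrite`) eventually changes the contents of `/etc/podinfo/labels`, which is what the "Successfully updated pod" line verifies.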
• [SLOW TEST:8.735 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":278,"skipped":4603,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:08:45.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-8b6979e1-a0e8-4a89-af3b-926ff769814a STEP: Creating a pod to test consume secrets May 20 01:08:45.832: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-73b34167-2c40-41c7-af76-5022dc94b9bd" in namespace "projected-9016" to be "Succeeded or Failed" May 20 01:08:45.851: INFO: Pod "pod-projected-secrets-73b34167-2c40-41c7-af76-5022dc94b9bd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.561181ms May 20 01:08:47.855: INFO: Pod "pod-projected-secrets-73b34167-2c40-41c7-af76-5022dc94b9bd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021951471s May 20 01:08:49.859: INFO: Pod "pod-projected-secrets-73b34167-2c40-41c7-af76-5022dc94b9bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026240808s STEP: Saw pod success May 20 01:08:49.859: INFO: Pod "pod-projected-secrets-73b34167-2c40-41c7-af76-5022dc94b9bd" satisfied condition "Succeeded or Failed" May 20 01:08:49.862: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-73b34167-2c40-41c7-af76-5022dc94b9bd container projected-secret-volume-test: STEP: delete the pod May 20 01:08:49.966: INFO: Waiting for pod pod-projected-secrets-73b34167-2c40-41c7-af76-5022dc94b9bd to disappear May 20 01:08:49.975: INFO: Pod pod-projected-secrets-73b34167-2c40-41c7-af76-5022dc94b9bd no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:08:49.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9016" for this suite. 
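The projected-secret case follows the standard e2e volume pattern: create a Secret, mount it through a `projected` volume, and run a pod that prints the mounted file. A sketch under assumed names — the Secret name, key, pod name, image, and command are illustrative; the container name and mount concept come from the log:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-projected-secret            # illustrative; the test uses a generated name
data:
  data-1: dmFsdWUtMQ==                 # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test # container name matches the log
    image: busybox                     # illustrative image
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: my-projected-secret
```

As in the log, the framework waits for the pod to satisfy "Succeeded or Failed", reads the container's logs to check the secret content, then deletes the pod.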
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":279,"skipped":4604,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:08:49.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 20 01:08:50.038: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea089736-3d4b-4d3e-99eb-1dd9adae0c4f" in namespace "projected-6099" to be "Succeeded or Failed" May 20 01:08:50.112: INFO: Pod "downwardapi-volume-ea089736-3d4b-4d3e-99eb-1dd9adae0c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 74.124385ms May 20 01:08:52.116: INFO: Pod "downwardapi-volume-ea089736-3d4b-4d3e-99eb-1dd9adae0c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07817841s May 20 01:08:54.121: INFO: Pod "downwardapi-volume-ea089736-3d4b-4d3e-99eb-1dd9adae0c4f": Phase="Running", Reason="", readiness=true. Elapsed: 4.082883808s May 20 01:08:56.125: INFO: Pod "downwardapi-volume-ea089736-3d4b-4d3e-99eb-1dd9adae0c4f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.087507884s STEP: Saw pod success May 20 01:08:56.125: INFO: Pod "downwardapi-volume-ea089736-3d4b-4d3e-99eb-1dd9adae0c4f" satisfied condition "Succeeded or Failed" May 20 01:08:56.128: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ea089736-3d4b-4d3e-99eb-1dd9adae0c4f container client-container: STEP: delete the pod May 20 01:08:56.158: INFO: Waiting for pod downwardapi-volume-ea089736-3d4b-4d3e-99eb-1dd9adae0c4f to disappear May 20 01:08:56.173: INFO: Pod downwardapi-volume-ea089736-3d4b-4d3e-99eb-1dd9adae0c4f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:08:56.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6099" for this suite. • [SLOW TEST:6.199 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":280,"skipped":4615,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:08:56.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting 
for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 01:08:56.259: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 20 01:08:57.308: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:08:58.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8362" for this suite. 
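The ReplicationController test relies on quota enforcement to surface a failure condition: a quota capping the namespace at two pods, plus an rc asking for more replicas than that. Sketch of the two objects — the replica count and container are assumptions; the quota limit of two pods and both object names come from the log:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                 # "allows only two pods", per the log
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                 # assumption: any value above the quota triggers the condition
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: nginx           # illustrative container
        image: nginx
```

Once the quota rejects pod creation, the rc's `status.conditions` gains a `ReplicaFailure` entry; scaling `spec.replicas` down within the quota clears it, which is the sequence the STEP lines above walk through.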
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":281,"skipped":4632,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:08:58.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-72e98c28-229c-431f-9b0d-7ccbb20d45bd STEP: Creating a pod to test consume secrets May 20 01:08:58.888: INFO: Waiting up to 5m0s for pod "pod-secrets-e6a745e1-4c69-4875-9de0-58c7658af3a4" in namespace "secrets-3408" to be "Succeeded or Failed" May 20 01:08:58.980: INFO: Pod "pod-secrets-e6a745e1-4c69-4875-9de0-58c7658af3a4": Phase="Pending", Reason="", readiness=false. Elapsed: 92.735352ms May 20 01:09:01.045: INFO: Pod "pod-secrets-e6a745e1-4c69-4875-9de0-58c7658af3a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157466049s May 20 01:09:03.049: INFO: Pod "pod-secrets-e6a745e1-4c69-4875-9de0-58c7658af3a4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.161757021s STEP: Saw pod success May 20 01:09:03.049: INFO: Pod "pod-secrets-e6a745e1-4c69-4875-9de0-58c7658af3a4" satisfied condition "Succeeded or Failed" May 20 01:09:03.051: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e6a745e1-4c69-4875-9de0-58c7658af3a4 container secret-env-test: STEP: delete the pod May 20 01:09:03.091: INFO: Waiting for pod pod-secrets-e6a745e1-4c69-4875-9de0-58c7658af3a4 to disappear May 20 01:09:03.124: INFO: Pod pod-secrets-e6a745e1-4c69-4875-9de0-58c7658af3a4 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:09:03.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3408" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":282,"skipped":4656,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:09:03.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:09:14.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9278" for this suite. • [SLOW TEST:11.155 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":288,"completed":283,"skipped":4662,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 01:09:14.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 20 01:09:14.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3278' May 20 01:09:14.648: INFO: stderr: "" May 20 01:09:14.648: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 20 01:09:14.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3278' May 20 01:09:14.801: INFO: stderr: "" May 20 01:09:14.801: INFO: stdout: "update-demo-nautilus-t5m6d update-demo-nautilus-vbqgm " May 20 01:09:14.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t5m6d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3278' May 20 01:09:14.898: INFO: stderr: "" May 20 01:09:14.899: INFO: stdout: "" May 20 01:09:14.899: INFO: update-demo-nautilus-t5m6d is created but not running May 20 01:09:19.899: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3278' May 20 01:09:20.029: INFO: stderr: "" May 20 01:09:20.029: INFO: stdout: "update-demo-nautilus-t5m6d update-demo-nautilus-vbqgm " May 20 01:09:20.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t5m6d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3278' May 20 01:09:20.120: INFO: stderr: "" May 20 01:09:20.120: INFO: stdout: "true" May 20 01:09:20.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t5m6d -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3278' May 20 01:09:20.221: INFO: stderr: "" May 20 01:09:20.221: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 01:09:20.221: INFO: validating pod update-demo-nautilus-t5m6d May 20 01:09:20.224: INFO: got data: { "image": "nautilus.jpg" } May 20 01:09:20.224: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 01:09:20.224: INFO: update-demo-nautilus-t5m6d is verified up and running May 20 01:09:20.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbqgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3278' May 20 01:09:20.318: INFO: stderr: "" May 20 01:09:20.318: INFO: stdout: "true" May 20 01:09:20.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbqgm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3278' May 20 01:09:20.400: INFO: stderr: "" May 20 01:09:20.400: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 01:09:20.400: INFO: validating pod update-demo-nautilus-vbqgm May 20 01:09:20.422: INFO: got data: { "image": "nautilus.jpg" } May 20 01:09:20.422: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 20 01:09:20.422: INFO: update-demo-nautilus-vbqgm is verified up and running STEP: using delete to clean up resources May 20 01:09:20.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3278' May 20 01:09:20.530: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 01:09:20.530: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 20 01:09:20.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3278' May 20 01:09:20.640: INFO: stderr: "No resources found in kubectl-3278 namespace.\n" May 20 01:09:20.640: INFO: stdout: "" May 20 01:09:20.640: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3278 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 20 01:09:20.974: INFO: stderr: "" May 20 01:09:20.974: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 01:09:20.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3278" for this suite. 
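The Update Demo run drives a two-replica ReplicationController and then force-deletes it. Reconstructing the manifest from values visible in the log (the image, the `update-demo` container name, the `name=update-demo` label, and the two pods all appear above; the exact field layout and the port are assumptions):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2                 # two pods appear in the log (…-t5m6d and …-vbqgm)
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80   # assumption; the validation step fetches nautilus.jpg over HTTP
```

The go-template queries in the log check, per pod, that a container named `update-demo` has reached the `running` state, then report its image before the `delete --grace-period=0 --force` cleanup.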
• [SLOW TEST:6.935 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":284,"skipped":4698,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 01:09:21.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting the auto-created API token
STEP: reading a file in the container
May 20 01:09:25.879: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4388 pod-service-account-71924a2b-ae4f-4fc3-a020-c876d6e57725 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
May 20 01:09:26.104: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4388 pod-service-account-71924a2b-ae4f-4fc3-a020-c876d6e57725 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
May 20 01:09:26.317: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4388 pod-service-account-71924a2b-ae4f-4fc3-a020-c876d6e57725 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 01:09:26.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4388" for this suite.
• [SLOW TEST:5.373 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":285,"skipped":4747,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Secrets should patch a secret [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 01:09:26.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 01:09:26.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1048" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":286,"skipped":4752,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 01:09:26.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating server pod server in namespace prestop-4812
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4812
STEP: Deleting pre-stop pod
May 20 01:09:39.930: INFO: Saw: {
  "Hostname": "server",
  "Sent": null,
  "Received": {
    "prestop": 1
  },
  "Errors": null,
  "Log": [
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
  ],
  "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 01:09:39.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4812" for this suite.
• [SLOW TEST:13.180 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":287,"skipped":4784,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 01:09:39.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 20 01:09:40.311: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:40.314: INFO: Number of nodes with available pods: 0
May 20 01:09:40.314: INFO: Node latest-worker is running more than one daemon pod
May 20 01:09:41.320: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:41.323: INFO: Number of nodes with available pods: 0
May 20 01:09:41.323: INFO: Node latest-worker is running more than one daemon pod
May 20 01:09:42.320: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:42.324: INFO: Number of nodes with available pods: 0
May 20 01:09:42.324: INFO: Node latest-worker is running more than one daemon pod
May 20 01:09:43.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:43.323: INFO: Number of nodes with available pods: 0
May 20 01:09:43.323: INFO: Node latest-worker is running more than one daemon pod
May 20 01:09:44.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:44.323: INFO: Number of nodes with available pods: 0
May 20 01:09:44.323: INFO: Node latest-worker is running more than one daemon pod
May 20 01:09:45.342: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:45.345: INFO: Number of nodes with available pods: 2
May 20 01:09:45.345: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 20 01:09:45.367: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:45.370: INFO: Number of nodes with available pods: 1
May 20 01:09:45.370: INFO: Node latest-worker2 is running more than one daemon pod
May 20 01:09:46.376: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:46.381: INFO: Number of nodes with available pods: 1
May 20 01:09:46.381: INFO: Node latest-worker2 is running more than one daemon pod
May 20 01:09:47.376: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:47.379: INFO: Number of nodes with available pods: 1
May 20 01:09:47.379: INFO: Node latest-worker2 is running more than one daemon pod
May 20 01:09:48.383: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:48.387: INFO: Number of nodes with available pods: 1
May 20 01:09:48.387: INFO: Node latest-worker2 is running more than one daemon pod
May 20 01:09:49.374: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:49.378: INFO: Number of nodes with available pods: 1
May 20 01:09:49.378: INFO: Node latest-worker2 is running more than one daemon pod
May 20 01:09:50.374: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:50.377: INFO: Number of nodes with available pods: 1
May 20 01:09:50.377: INFO: Node latest-worker2 is running more than one daemon pod
May 20 01:09:52.101: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:52.108: INFO: Number of nodes with available pods: 1
May 20 01:09:52.108: INFO: Node latest-worker2 is running more than one daemon pod
May 20 01:09:52.588: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:52.591: INFO: Number of nodes with available pods: 1
May 20 01:09:52.591: INFO: Node latest-worker2 is running more than one daemon pod
May 20 01:09:53.375: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:53.379: INFO: Number of nodes with available pods: 1
May 20 01:09:53.379: INFO: Node latest-worker2 is running more than one daemon pod
May 20 01:09:54.374: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 01:09:54.376: INFO: Number of nodes with available pods: 2
May 20 01:09:54.376: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5616, will wait for the garbage collector to delete the pods
May 20 01:09:54.438: INFO: Deleting DaemonSet.extensions daemon-set took: 7.612927ms
May 20 01:09:54.738: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.239216ms
May 20 01:10:05.247: INFO: Number of nodes with available pods: 0
May 20 01:10:05.247: INFO: Number of running nodes: 0, number of available pods: 0
May 20 01:10:05.250: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5616/daemonsets","resourceVersion":"6104399"},"items":null}
May 20 01:10:05.252: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5616/pods","resourceVersion":"6104399"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 01:10:05.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5616" for this suite.
• [SLOW TEST:25.306 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":288,"skipped":4790,"failed":0}
SSSSSSSSSSSSSSSSS
May 20 01:10:05.266: INFO: Running AfterSuite actions on all nodes
May 20 01:10:05.266: INFO: Running AfterSuite actions on node 1
May 20 01:10:05.266: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0}
Ran 288 of 5095 Specs in 5490.162 seconds
SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped
PASS
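The PreStop test earlier in this run checked that the server pod's "Received" map showed {"prestop": 1} after the tester pod was deleted: the kubelet runs a container's preStop hook before sending the termination signal, so the hook's request reaches the server exactly once per deletion. A tiny sketch of that contract, with a hypothetical in-memory server standing in for the test's nettest server pod:

```python
# Sketch of the preStop contract the test verifies; Server is a hypothetical
# stand-in for the nettest server pod that records incoming hook requests.
class Server:
    def __init__(self):
        self.received = {}

    def handle(self, path):
        # Count each request by path, like the server's "Received" map.
        self.received[path] = self.received.get(path, 0) + 1

def delete_pod_with_prestop(server):
    server.handle("prestop")  # kubelet runs the preStop hook first...
    # ...and only then signals the container to terminate (not modeled here).

server = Server()
delete_pod_with_prestop(server)
print(server.received)  # {'prestop': 1}
```

One deletion produces exactly one recorded "prestop" hit, which is what the "Received": {"prestop": 1} in the Saw: output asserts.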