I0506 19:41:33.770316 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0506 19:41:33.770556 7 e2e.go:129] Starting e2e run "007d0f8a-11d6-40be-a500-64001ef56cc7" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588794092 - Will randomize all specs
Will run 288 of 5094 specs

May 6 19:41:33.821: INFO: >>> kubeConfig: /root/.kube/config
May 6 19:41:33.825: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 6 19:41:33.850: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 6 19:41:33.929: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 6 19:41:33.929: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 6 19:41:33.929: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 6 19:41:33.939: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 6 19:41:33.939: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 6 19:41:33.939: INFO: e2e test version: v1.19.0-alpha.2.298+0bcbe384d866b9
May 6 19:41:33.940: INFO: kube-apiserver version: v1.18.2
May 6 19:41:33.940: INFO: >>> kubeConfig: /root/.kube/config
May 6 19:41:33.945: INFO: Cluster IP family: ipv4
SS
------------------------------
[sig-cli] Kubectl client Kubectl expose
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:41:33.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
May 6 19:41:34.018: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
May 6 19:41:34.021: INFO: namespace kubectl-8000
May 6 19:41:34.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8000'
May 6 19:41:37.555: INFO: stderr: ""
May 6 19:41:37.555: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 6 19:41:38.655: INFO: Selector matched 1 pods for map[app:agnhost]
May 6 19:41:38.655: INFO: Found 0 / 1
May 6 19:41:40.085: INFO: Selector matched 1 pods for map[app:agnhost]
May 6 19:41:40.085: INFO: Found 0 / 1
May 6 19:41:40.788: INFO: Selector matched 1 pods for map[app:agnhost]
May 6 19:41:40.788: INFO: Found 0 / 1
May 6 19:41:41.559: INFO: Selector matched 1 pods for map[app:agnhost]
May 6 19:41:41.559: INFO: Found 0 / 1
May 6 19:41:42.559: INFO: Selector matched 1 pods for map[app:agnhost]
May 6 19:41:42.559: INFO: Found 0 / 1
May 6 19:41:43.794: INFO: Selector matched 1 pods for map[app:agnhost]
May 6 19:41:43.794: INFO: Found 0 / 1
May 6 19:41:44.560: INFO: Selector matched 1 pods for map[app:agnhost]
May 6 19:41:44.560: INFO: Found 1 / 1
May 6 19:41:44.560: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 6 19:41:44.563: INFO: Selector matched 1 pods for map[app:agnhost]
May 6 19:41:44.563: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 6 19:41:44.563: INFO: wait on agnhost-master startup in kubectl-8000
May 6 19:41:44.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-plg6d agnhost-master --namespace=kubectl-8000'
May 6 19:41:44.674: INFO: stderr: ""
May 6 19:41:44.674: INFO: stdout: "Paused\n"
STEP: exposing RC
May 6 19:41:44.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8000'
May 6 19:41:44.824: INFO: stderr: ""
May 6 19:41:44.824: INFO: stdout: "service/rm2 exposed\n"
May 6 19:41:44.883: INFO: Service rm2 in namespace kubectl-8000 found.
STEP: exposing service
May 6 19:41:46.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8000'
May 6 19:41:47.212: INFO: stderr: ""
May 6 19:41:47.212: INFO: stdout: "service/rm3 exposed\n"
May 6 19:41:47.303: INFO: Service rm3 in namespace kubectl-8000 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:41:49.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8000" for this suite.
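The expose flow above reduces to two kubectl invocations against the test's generated namespace. A minimal by-hand reproduction, assuming the agnhost-master RC already exists as in the test:

kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8000
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8000
# both services should now exist, rm3 layered on rm2's selector:
kubectl get service rm2 rm3 --namespace=kubectl-8000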
• [SLOW TEST:15.375 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":1,"skipped":2,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:41:49.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 6 19:41:50.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01eb30eb-4577-4d27-a414-5a41bd6a04f4" in namespace "projected-2871" to be "Succeeded or Failed"
May 6 19:41:50.146: INFO: Pod "downwardapi-volume-01eb30eb-4577-4d27-a414-5a41bd6a04f4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.009191ms
May 6 19:41:52.189: INFO: Pod "downwardapi-volume-01eb30eb-4577-4d27-a414-5a41bd6a04f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066721534s
May 6 19:41:54.193: INFO: Pod "downwardapi-volume-01eb30eb-4577-4d27-a414-5a41bd6a04f4": Phase="Running", Reason="", readiness=true. Elapsed: 4.070904818s
May 6 19:41:57.052: INFO: Pod "downwardapi-volume-01eb30eb-4577-4d27-a414-5a41bd6a04f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.929599677s
STEP: Saw pod success
May 6 19:41:57.052: INFO: Pod "downwardapi-volume-01eb30eb-4577-4d27-a414-5a41bd6a04f4" satisfied condition "Succeeded or Failed"
May 6 19:41:57.464: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-01eb30eb-4577-4d27-a414-5a41bd6a04f4 container client-container:
STEP: delete the pod
May 6 19:41:57.955: INFO: Waiting for pod downwardapi-volume-01eb30eb-4577-4d27-a414-5a41bd6a04f4 to disappear
May 6 19:41:57.959: INFO: Pod downwardapi-volume-01eb30eb-4577-4d27-a414-5a41bd6a04f4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:41:57.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2871" for this suite.
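A sketch of the kind of pod this test creates: a projected downward API volume with an explicit defaultMode. The image, command, and mode below are illustrative, not the test's exact spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]   # prints the mode applied to the file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400          # the DefaultMode under test
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF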
• [SLOW TEST:8.873 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":2,"skipped":49,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:41:58.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
May 6 19:41:58.641: INFO: >>> kubeConfig: /root/.kube/config
May 6 19:42:01.648: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:42:12.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8242" for this suite.
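What this test asserts can be spot-checked by hand the same way: read the aggregated OpenAPI document and look for the CRD-generated definitions. The grep pattern is illustrative; any registered CRD kind name would do:

kubectl get --raw /openapi/v2 | jq -r '.definitions | keys[]' | grep -i e2e-test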
• [SLOW TEST:15.036 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":3,"skipped":58,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:42:13.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 6 19:42:14.240: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 6 19:42:17.423: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724390934, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724390934, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724390934, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724390934, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 19:42:19.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724390934, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724390934, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724390934, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724390934, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 19:42:21.734: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724390934, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724390934, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724390934, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724390934, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 6 19:42:24.878: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 6 19:42:24.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5662-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:42:26.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7566" for this suite.
STEP: Destroying namespace "webhook-7566-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:13.558 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":4,"skipped":60,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:42:26.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 6 19:42:26.856: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c00baf59-0e4f-4477-b76d-c5f148488cc1" in namespace "downward-api-3378" to be "Succeeded or Failed"
May 6 19:42:26.890: INFO: Pod "downwardapi-volume-c00baf59-0e4f-4477-b76d-c5f148488cc1": Phase="Pending", Reason="", readiness=false. Elapsed: 33.916354ms
May 6 19:42:28.894: INFO: Pod "downwardapi-volume-c00baf59-0e4f-4477-b76d-c5f148488cc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038226325s
May 6 19:42:30.955: INFO: Pod "downwardapi-volume-c00baf59-0e4f-4477-b76d-c5f148488cc1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099369601s
May 6 19:42:33.002: INFO: Pod "downwardapi-volume-c00baf59-0e4f-4477-b76d-c5f148488cc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.146166716s
STEP: Saw pod success
May 6 19:42:33.002: INFO: Pod "downwardapi-volume-c00baf59-0e4f-4477-b76d-c5f148488cc1" satisfied condition "Succeeded or Failed"
May 6 19:42:33.232: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c00baf59-0e4f-4477-b76d-c5f148488cc1 container client-container:
STEP: delete the pod
May 6 19:42:33.801: INFO: Waiting for pod downwardapi-volume-c00baf59-0e4f-4477-b76d-c5f148488cc1 to disappear
May 6 19:42:33.860: INFO: Pod downwardapi-volume-c00baf59-0e4f-4477-b76d-c5f148488cc1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:42:33.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3378" for this suite.
• [SLOW TEST:7.079 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":5,"skipped":72,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:42:33.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:42:41.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3263" for this suite.
• [SLOW TEST:7.662 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":288,"completed":6,"skipped":73,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:42:41.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 6 19:42:52.299: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 6 19:42:52.375: INFO: Pod pod-with-prestop-exec-hook still exists
May 6 19:42:54.376: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 6 19:42:54.380: INFO: Pod pod-with-prestop-exec-hook still exists
May 6 19:42:56.376: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 6 19:42:56.381: INFO: Pod pod-with-prestop-exec-hook still exists
May 6 19:42:58.376: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 6 19:42:58.381: INFO: Pod pod-with-prestop-exec-hook still exists
May 6 19:43:00.376: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 6 19:43:00.380: INFO: Pod pod-with-prestop-exec-hook still exists
May 6 19:43:02.376: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 6 19:43:02.381: INFO: Pod pod-with-prestop-exec-hook still exists
May 6 19:43:04.376: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 6 19:43:04.380: INFO: Pod pod-with-prestop-exec-hook still exists
May 6 19:43:06.376: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 6 19:43:06.381: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:43:06.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7497" for this suite.
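The pod under test carries a preStop exec hook that the kubelet runs before terminating the container. A minimal sketch of that shape (the image and hook command are illustrative; the real test's hook calls out to the handler pod created in BeforeEach):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # runs inside the container, before SIGTERM, when the pod is deleted
          command: ["sh", "-c", "echo prestop > /tmp/prestop"]
EOF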
• [SLOW TEST:24.874 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":7,"skipped":139,"failed":0}
S
------------------------------
[sig-network] Services
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:43:06.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7902
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-7902
I0506 19:43:06.580095 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7902, replica count: 2
I0506 19:43:09.630587 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 19:43:12.630818 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 6 19:43:12.630: INFO: Creating new exec pod
May 6 19:43:17.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7902 execpod5zh6p -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 6 19:43:17.886: INFO: stderr: "I0506 19:43:17.809874 129 log.go:172] (0xc000c59760) (0xc0009ca500) Create stream\nI0506 19:43:17.809921 129 log.go:172] (0xc000c59760) (0xc0009ca500) Stream added, broadcasting: 1\nI0506 19:43:17.812730 129 log.go:172] (0xc000c59760) Reply frame received for 1\nI0506 19:43:17.812767 129 log.go:172] (0xc000c59760) (0xc00035e3c0) Create stream\nI0506 19:43:17.812775 129 log.go:172] (0xc000c59760) (0xc00035e3c0) Stream added, broadcasting: 3\nI0506 19:43:17.813953 129 log.go:172] (0xc000c59760) Reply frame received for 3\nI0506 19:43:17.813977 129 log.go:172] (0xc000c59760) (0xc000a321e0) Create stream\nI0506 19:43:17.813987 129 log.go:172] (0xc000c59760) (0xc000a321e0) Stream added, broadcasting: 5\nI0506 19:43:17.814860 129 log.go:172] (0xc000c59760) Reply frame received for 5\nI0506 19:43:17.876971 129 log.go:172] (0xc000c59760) Data frame received for 5\nI0506 19:43:17.877006 129 log.go:172] (0xc000a321e0) (5) Data frame handling\nI0506 19:43:17.877027 129 log.go:172] (0xc000a321e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0506 19:43:17.877770 129 log.go:172] (0xc000c59760) Data frame received for 5\nI0506 19:43:17.877801 129 log.go:172] (0xc000a321e0) (5) Data frame handling\nI0506 19:43:17.877833 129 log.go:172] (0xc000a321e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0506 19:43:17.878060 129 log.go:172] (0xc000c59760) Data frame received for 5\nI0506 19:43:17.878083 129 log.go:172] (0xc000a321e0) (5) Data frame handling\nI0506 19:43:17.878402 129 log.go:172] (0xc000c59760) Data frame received for 3\nI0506 19:43:17.878435 129 log.go:172] (0xc00035e3c0) (3) Data frame handling\nI0506 19:43:17.879612 129 log.go:172] (0xc000c59760) Data frame received for 1\nI0506 19:43:17.879640 129 log.go:172] (0xc0009ca500) (1) Data frame handling\nI0506 19:43:17.879670 129 log.go:172] (0xc0009ca500) (1) Data frame sent\nI0506 19:43:17.879692 129 log.go:172] (0xc000c59760) (0xc0009ca500) Stream removed, broadcasting: 1\nI0506 19:43:17.879716 129 log.go:172] (0xc000c59760) Go away received\nI0506 19:43:17.880145 129 log.go:172] (0xc000c59760) (0xc0009ca500) Stream removed, broadcasting: 1\nI0506 19:43:17.880166 129 log.go:172] (0xc000c59760) (0xc00035e3c0) Stream removed, broadcasting: 3\nI0506 19:43:17.880177 129 log.go:172] (0xc000c59760) (0xc000a321e0) Stream removed, broadcasting: 5\n"
May 6 19:43:17.886: INFO: stdout: ""
May 6 19:43:17.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7902 execpod5zh6p -- /bin/sh -x -c nc -zv -t -w 2 10.104.171.66 80'
May 6 19:43:18.115: INFO: stderr: "I0506 19:43:18.046304 149 log.go:172] (0xc000694370) (0xc000139d60) Create stream\nI0506 19:43:18.046381 149 log.go:172] (0xc000694370) (0xc000139d60) Stream added, broadcasting: 1\nI0506 19:43:18.049588 149 log.go:172] (0xc000694370) Reply frame received for 1\nI0506 19:43:18.049622 149 log.go:172] (0xc000694370) (0xc0005446e0) Create stream\nI0506 19:43:18.049631 149 log.go:172] (0xc000694370) (0xc0005446e0) Stream added, broadcasting: 3\nI0506 19:43:18.050718 149 log.go:172] (0xc000694370) Reply frame received for 3\nI0506 19:43:18.050782 149 log.go:172] (0xc000694370) (0xc0004b06e0) Create stream\nI0506 19:43:18.050805 149 log.go:172] (0xc000694370) (0xc0004b06e0) Stream added, broadcasting: 5\nI0506 19:43:18.051799 149 log.go:172] (0xc000694370) Reply frame received for 5\nI0506 19:43:18.108551 149 log.go:172] (0xc000694370) Data frame received for 5\nI0506 19:43:18.108690 149 log.go:172] (0xc0004b06e0) (5) Data frame handling\nI0506 19:43:18.108722 149 log.go:172] (0xc0004b06e0) (5) Data frame sent\nI0506 19:43:18.108737 149 log.go:172] (0xc000694370) Data frame received for 5\nI0506 19:43:18.108747 149 log.go:172] (0xc0004b06e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.171.66 80\nConnection to 10.104.171.66 80 port [tcp/http] succeeded!\nI0506 19:43:18.108778 149 log.go:172] (0xc000694370) Data frame received for 3\nI0506 19:43:18.108810 149 log.go:172] (0xc0005446e0) (3) Data frame handling\nI0506 19:43:18.110248 149 log.go:172] (0xc000694370) Data frame received for 1\nI0506 19:43:18.110281 149 log.go:172] (0xc000139d60) (1) Data frame handling\nI0506 19:43:18.110303 149 log.go:172] (0xc000139d60) (1) Data frame sent\nI0506 19:43:18.110327 149 log.go:172] (0xc000694370) (0xc000139d60) Stream removed, broadcasting: 1\nI0506 19:43:18.110613 149 log.go:172] (0xc000694370) Go away received\nI0506 19:43:18.110748 149 log.go:172] (0xc000694370) (0xc000139d60) Stream removed, broadcasting: 1\nI0506 19:43:18.110776 149 log.go:172] (0xc000694370) (0xc0005446e0) Stream removed, broadcasting: 3\nI0506 19:43:18.110791 149 log.go:172] (0xc000694370) (0xc0004b06e0) Stream removed, broadcasting: 5\n"
May 6 19:43:18.115: INFO: stdout: ""
May 6 19:43:18.115: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7902 execpod5zh6p -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32581'
May 6 19:43:18.336: INFO: stderr: "I0506 19:43:18.259103 169 log.go:172] (0xc000b542c0) (0xc0003da280) Create stream\nI0506 19:43:18.259170 169 log.go:172] (0xc000b542c0) (0xc0003da280) Stream added, broadcasting: 1\nI0506 19:43:18.262924 169 log.go:172] (0xc000b542c0) Reply frame received for 1\nI0506 19:43:18.262956 169 log.go:172] (0xc000b542c0) (0xc00024c6e0) Create stream\nI0506 19:43:18.262964 169 log.go:172] (0xc000b542c0) (0xc00024c6e0) Stream added, broadcasting: 3\nI0506 19:43:18.263648 169 log.go:172] (0xc000b542c0) Reply frame received for 3\nI0506 19:43:18.263675 169 log.go:172] (0xc000b542c0) (0xc0003da8c0) Create stream\nI0506 19:43:18.263684 169 log.go:172] (0xc000b542c0) (0xc0003da8c0) Stream added, broadcasting: 5\nI0506 19:43:18.264291 169 log.go:172] (0xc000b542c0) Reply frame received for 5\nI0506 19:43:18.329010 169 log.go:172] (0xc000b542c0) Data frame received for 5\nI0506 19:43:18.329042 169 log.go:172] (0xc0003da8c0) (5) Data frame handling\nI0506 19:43:18.329056 169 log.go:172] (0xc0003da8c0) (5) Data frame sent\nI0506 19:43:18.329063 169 log.go:172] (0xc000b542c0) Data frame received for 5\nI0506 19:43:18.329069 169 log.go:172] (0xc0003da8c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32581\nConnection to 172.17.0.13 32581 port [tcp/32581] succeeded!\nI0506 19:43:18.329088 169 log.go:172] (0xc0003da8c0) (5) Data frame sent\nI0506 19:43:18.329771 169 log.go:172] (0xc000b542c0) Data frame received for 5\nI0506 19:43:18.329798 169 log.go:172] (0xc0003da8c0) (5) Data frame handling\nI0506 19:43:18.329823 169 log.go:172] (0xc000b542c0) Data frame received for 3\nI0506 19:43:18.329849 169 log.go:172] (0xc00024c6e0) (3) Data frame handling\nI0506 19:43:18.331173 169 log.go:172] (0xc000b542c0) Data frame received for 1\nI0506 19:43:18.331200 169 log.go:172] (0xc0003da280) (1) Data frame handling\nI0506 19:43:18.331221 169 log.go:172] (0xc0003da280) (1) Data frame sent\nI0506 19:43:18.331247 169 log.go:172] (0xc000b542c0) (0xc0003da280) Stream removed, broadcasting: 1\nI0506 19:43:18.331276 169 log.go:172] (0xc000b542c0) Go away received\nI0506 19:43:18.331498 169 log.go:172] (0xc000b542c0) (0xc0003da280) Stream removed, broadcasting: 1\nI0506 19:43:18.331513 169 log.go:172] (0xc000b542c0) (0xc00024c6e0) Stream removed, broadcasting: 3\nI0506 19:43:18.331519 169 log.go:172] (0xc000b542c0) (0xc0003da8c0) Stream removed, broadcasting: 5\n"
May 6 19:43:18.336: INFO: stdout: ""
May 6 19:43:18.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7902 execpod5zh6p -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32581'
May 6 19:43:18.564: INFO: stderr: "I0506 19:43:18.501098 189 log.go:172] (0xc0009c6000) (0xc0008f6140) Create stream\nI0506 19:43:18.501518 189 log.go:172] (0xc0009c6000) (0xc0008f6140) Stream added, broadcasting: 1\nI0506 19:43:18.504745 189 log.go:172] (0xc0009c6000) Reply frame received for 1\nI0506 19:43:18.504796 189 log.go:172] (0xc0009c6000) (0xc000509180) Create stream\nI0506 19:43:18.504817 189 log.go:172] (0xc0009c6000) (0xc000509180) Stream added, broadcasting: 3\nI0506 19:43:18.506059 189 log.go:172] (0xc0009c6000) Reply frame received for 3\nI0506 19:43:18.506103 189 log.go:172] (0xc0009c6000) (0xc00092f040) Create stream\nI0506 19:43:18.506123 189 log.go:172] (0xc0009c6000) (0xc00092f040) Stream added, broadcasting: 5\nI0506 19:43:18.507125 189 log.go:172] (0xc0009c6000) Reply frame received for 5\nI0506 19:43:18.557448 189 log.go:172] (0xc0009c6000) Data frame received for 5\nI0506 19:43:18.557502 189 log.go:172] (0xc00092f040) (5) Data frame handling\nI0506 19:43:18.557525 189 log.go:172] (0xc00092f040) (5) Data frame sent\nI0506 19:43:18.557537 189 log.go:172] (0xc0009c6000) Data frame received for 5\nI0506 19:43:18.557544 189 log.go:172] (0xc00092f040) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32581\nConnection to 172.17.0.12 32581 port [tcp/32581] succeeded!\nI0506 19:43:18.557566 189 log.go:172] (0xc0009c6000) Data frame received for 3\nI0506 19:43:18.557592 189 log.go:172] (0xc000509180) (3) Data frame handling\nI0506 19:43:18.559101 189 log.go:172] (0xc0009c6000) Data frame received for 1\nI0506 19:43:18.559126 189 log.go:172] (0xc0008f6140) (1) Data frame handling\nI0506 19:43:18.559142 189 log.go:172] (0xc0008f6140) (1) Data frame sent\nI0506 19:43:18.559153 189 log.go:172] (0xc0009c6000) (0xc0008f6140) Stream removed, broadcasting: 1\nI0506 19:43:18.559172 189 log.go:172] (0xc0009c6000) Go away received\nI0506 19:43:18.559513 189 log.go:172] (0xc0009c6000) (0xc0008f6140) Stream removed, broadcasting: 1\nI0506 19:43:18.559529 189 log.go:172] (0xc0009c6000) (0xc000509180) Stream removed, broadcasting: 3\nI0506 19:43:18.559537 189 log.go:172] (0xc0009c6000) (0xc00092f040) Stream removed, broadcasting: 5\n"
May 6 19:43:18.564: INFO: stdout: ""
May 6 19:43:18.564: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:43:18.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7902" for this suite.
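The type flip the test performs can be approximated by hand. The external name and port below are placeholders, and a real backend would also need a selector and pods, as the test's replication controller provides:

kubectl create service externalname externalname-service --external-name example.com
# switch the type; strategic merge patch clears externalName and adds a port:
kubectl patch service externalname-service -p '{"spec":{"type":"NodePort","externalName":null,"ports":[{"port":80}]}}'
# then probe from inside the cluster, as the test does:
#   nc -zv -t -w 2 externalname-service 80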
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:12.354 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":8,"skipped":140,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:43:18.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
May 6 19:43:19.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
May 6 19:43:30.430: INFO: >>> kubeConfig: /root/.kube/config
May 6 19:43:33.419: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:43:44.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2902" for this suite.
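The "one multiversion CRD" case corresponds to a CRD that serves two versions at once, both of which should surface in the OpenAPI document. A skeletal example with placeholder group and kind names:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: multiversions.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: multiversions
    singular: multiversion
    kind: MultiVersion
  versions:
  - name: v1alpha1
    served: true          # published, but not the storage version
    storage: false
    schema:
      openAPIV3Schema:
        type: object
  - name: v1beta1
    served: true
    storage: true         # exactly one version must be the storage version
    schema:
      openAPIV3Schema:
        type: object
EOF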
• [SLOW TEST:25.540 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":9,"skipped":172,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:43:44.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-8fab2b8c-76ff-455c-a588-d545b9ccc3af
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-8fab2b8c-76ff-455c-a588-d545b9ccc3af
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:43:50.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4379" for this suite.
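The create-then-update sequence above can be sketched with plain kubectl; the ConfigMap name and keys here are illustrative:

kubectl create configmap configmap-test-upd --from-literal=data-1=value-1
# ...mount it into a pod as a configMap volume, then change the data in place:
kubectl create configmap configmap-test-upd --from-literal=data-1=value-2 \
  --dry-run=client -o yaml | kubectl replace -f -
# the kubelet syncs the mounted file to the new value on its next sync period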
• [SLOW TEST:6.191 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":10,"skipped":217,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:43:50.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-e26e0ba0-d774-459c-80ef-66e87748d730
STEP: Creating a pod to test consume configMaps
May 6 19:43:50.671: INFO: Waiting up to 5m0s for pod "pod-configmaps-f9a529e2-64b0-4c3d-b575-080123600c34" in namespace "configmap-6002" to be "Succeeded or Failed"
May 6 19:43:50.698: INFO: Pod "pod-configmaps-f9a529e2-64b0-4c3d-b575-080123600c34": Phase="Pending", Reason="", readiness=false. Elapsed: 27.038562ms
May 6 19:43:52.867: INFO: Pod "pod-configmaps-f9a529e2-64b0-4c3d-b575-080123600c34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195788452s
May 6 19:43:54.870: INFO: Pod "pod-configmaps-f9a529e2-64b0-4c3d-b575-080123600c34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199037028s
May 6 19:43:56.888: INFO: Pod "pod-configmaps-f9a529e2-64b0-4c3d-b575-080123600c34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.216900444s
STEP: Saw pod success
May 6 19:43:56.888: INFO: Pod "pod-configmaps-f9a529e2-64b0-4c3d-b575-080123600c34" satisfied condition "Succeeded or Failed"
May 6 19:43:56.890: INFO: Trying to get logs from node latest-worker pod pod-configmaps-f9a529e2-64b0-4c3d-b575-080123600c34 container configmap-volume-test:
STEP: delete the pod
May 6 19:43:57.257: INFO: Waiting for pod pod-configmaps-f9a529e2-64b0-4c3d-b575-080123600c34 to disappear
May 6 19:43:57.307: INFO: Pod pod-configmaps-f9a529e2-64b0-4c3d-b575-080123600c34 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:43:57.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6002" for this suite.
• [SLOW TEST:6.820 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":11,"skipped":228,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:43:57.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
May 6 19:43:57.440: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:44:15.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-626" for this suite.
• [SLOW TEST:18.678 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":12,"skipped":240,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:44:15.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:44:32.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1183" for this suite.
• [SLOW TEST:16.413 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":288,"completed":13,"skipped":245,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:44:32.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting the proxy server
May 6 19:44:32.466: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:44:32.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9408" for this suite.
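Passing --port 0 (or -p 0, as the test does) asks kubectl proxy to bind an ephemeral port, which it reports on startup. Reproduced by hand, with the printed port standing in for whatever the kernel assigns:

kubectl proxy --port=0 --disable-filter=true &
# the proxy prints its chosen port, e.g. "Starting to serve on 127.0.0.1:37459";
# the test then fetches /api/ through it:
curl http://127.0.0.1:37459/api/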
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":14,"skipped":269,"failed":0} ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:44:32.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 6 19:44:32.734: INFO: Waiting up to 5m0s for pod "var-expansion-d876e980-f34d-49be-a9d8-23917a5b81b6" in namespace "var-expansion-8210" to be "Succeeded or Failed" May 6 19:44:32.763: INFO: Pod "var-expansion-d876e980-f34d-49be-a9d8-23917a5b81b6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.658636ms May 6 19:44:34.767: INFO: Pod "var-expansion-d876e980-f34d-49be-a9d8-23917a5b81b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033129577s May 6 19:44:36.772: INFO: Pod "var-expansion-d876e980-f34d-49be-a9d8-23917a5b81b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037632261s May 6 19:44:38.861: INFO: Pod "var-expansion-d876e980-f34d-49be-a9d8-23917a5b81b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126626628s STEP: Saw pod success May 6 19:44:38.861: INFO: Pod "var-expansion-d876e980-f34d-49be-a9d8-23917a5b81b6" satisfied condition "Succeeded or Failed" May 6 19:44:38.863: INFO: Trying to get logs from node latest-worker pod var-expansion-d876e980-f34d-49be-a9d8-23917a5b81b6 container dapi-container: STEP: delete the pod May 6 19:44:39.367: INFO: Waiting for pod var-expansion-d876e980-f34d-49be-a9d8-23917a5b81b6 to disappear May 6 19:44:39.403: INFO: Pod var-expansion-d876e980-f34d-49be-a9d8-23917a5b81b6 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:44:39.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8210" for this suite. 
• [SLOW TEST:6.907 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":15,"skipped":269,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:44:39.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 6 19:44:39.683: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. May 6 19:44:40.413: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 6 19:44:43.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391080, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391080, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391080, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391080, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 19:44:45.322: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391080, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391080, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391080, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391080, loc:(*time.Location)(0x7c2f200)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 19:44:48.027: INFO: Waited 735.344642ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:44:50.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8704" for this suite. • [SLOW TEST:11.817 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":16,"skipped":275,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:44:51.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 19:44:52.166: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-554a1e2c-a97e-4aa2-b66d-c83d7be22564" in namespace "security-context-test-9769" to be "Succeeded or Failed" May 6 19:44:52.481: INFO: Pod "busybox-privileged-false-554a1e2c-a97e-4aa2-b66d-c83d7be22564": Phase="Pending", Reason="", readiness=false. Elapsed: 315.23549ms May 6 19:44:54.484: INFO: Pod "busybox-privileged-false-554a1e2c-a97e-4aa2-b66d-c83d7be22564": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318621396s May 6 19:44:56.669: INFO: Pod "busybox-privileged-false-554a1e2c-a97e-4aa2-b66d-c83d7be22564": Phase="Pending", Reason="", readiness=false. Elapsed: 4.503733569s May 6 19:44:58.711: INFO: Pod "busybox-privileged-false-554a1e2c-a97e-4aa2-b66d-c83d7be22564": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.54531558s May 6 19:44:58.711: INFO: Pod "busybox-privileged-false-554a1e2c-a97e-4aa2-b66d-c83d7be22564" satisfied condition "Succeeded or Failed" May 6 19:44:58.728: INFO: Got logs for pod "busybox-privileged-false-554a1e2c-a97e-4aa2-b66d-c83d7be22564": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:44:58.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9769" for this suite. • [SLOW TEST:7.434 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":17,"skipped":289,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:44:58.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-928c0ed8-fe76-42cb-aa50-6216804af82f in namespace container-probe-5749 May 6 19:45:03.657: INFO: Started pod busybox-928c0ed8-fe76-42cb-aa50-6216804af82f in namespace container-probe-5749 STEP: checking the pod's current state and verifying that restartCount is present May 6 19:45:03.660: INFO: Initial restart count of pod busybox-928c0ed8-fe76-42cb-aa50-6216804af82f is 0 May 6 19:45:54.337: INFO: Restart count of pod container-probe-5749/busybox-928c0ed8-fe76-42cb-aa50-6216804af82f is now 1 (50.676839487s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:45:54.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5749" for this suite.
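For readers following along: the probe test above passes because the pod's container creates /tmp/health, then deletes it partway through its life, so the exec probe starts failing and the kubelet restarts the container (hence the restart count going from 0 to 1 after roughly 50 seconds). A minimal sketch of that kind of pod spec follows; the image, timings, and names here are illustrative, not the exact manifest the e2e framework generates:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo          # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    # Create the health file, keep it around for a while, then remove it
    # so the probe starts failing.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # the probe the test name refers to
      initialDelaySeconds: 5
      periodSeconds: 5

Once /tmp/health disappears, cat exits non-zero, the probe fails, and the kubelet restarts the container, which is exactly the restartCount transition the log asserts on.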
• [SLOW TEST:55.657 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with an exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":18,"skipped":307,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:45:54.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 19:45:54.511: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 6 19:45:54.531: INFO: Pod name sample-pod: Found 0 pods out of 1 May 6 19:45:59.766: INFO: Pod name sample-pod: Found 1 pod out of 1 STEP: ensuring each pod is running May 6 19:45:59.766: INFO: Creating deployment "test-rolling-update-deployment" May 6 19:46:00.053: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 6 19:46:00.096: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 6 19:46:02.433: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected one May 6 19:46:02.435: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391160, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391160, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391160, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391160, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 19:46:04.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391160, 
loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391160, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391160, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391160, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 19:46:06.439: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 6 19:46:06.447: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7096 /apis/apps/v1/namespaces/deployment-7096/deployments/test-rolling-update-deployment c2ed366e-6377-434f-859b-27e911a54e51 2078879 1 2020-05-06 19:45:59 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-06 19:45:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-06 19:46:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004a38ae8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-06 19:46:00 +0000 UTC,LastTransitionTime:2020-05-06 19:46:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-06 19:46:04 +0000 UTC,LastTransitionTime:2020-05-06 19:46:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 6 19:46:06.451: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-7096 /apis/apps/v1/namespaces/deployment-7096/replicasets/test-rolling-update-deployment-df7bb669b afad8bc4-06cc-43ea-8ee8-181e9bd5d437 2078868 1 2020-05-06 19:46:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment c2ed366e-6377-434f-859b-27e911a54e51 0xc004a39040 0xc004a39041}] [] [{kube-controller-manager Update apps/v1 2020-05-06 19:46:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2ed366e-6377-434f-859b-27e911a54e51\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004a390b8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 6 19:46:06.451: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 6 19:46:06.451: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7096 /apis/apps/v1/namespaces/deployment-7096/replicasets/test-rolling-update-controller 24a0795a-a19a-4c67-8e3a-3fa44c6736b7 2078878 2 2020-05-06 19:45:54 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment c2ed366e-6377-434f-859b-27e911a54e51 0xc004a38f37 0xc004a38f38}] [] [{e2e.test Update apps/v1 2020-05-06 19:45:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-06 19:46:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2ed366e-6377-434f-859b-27e911a54e51\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004a38fd8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 19:46:06.454: INFO: Pod "test-rolling-update-deployment-df7bb669b-zzvk6" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-zzvk6 test-rolling-update-deployment-df7bb669b- deployment-7096 /api/v1/namespaces/deployment-7096/pods/test-rolling-update-deployment-df7bb669b-zzvk6 d0f10986-b388-40e3-a9d0-eacad201b0a1 2078867 0 2020-05-06 19:46:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet 
test-rolling-update-deployment-df7bb669b afad8bc4-06cc-43ea-8ee8-181e9bd5d437 0xc004a39570 0xc004a39571}] [] [{kube-controller-manager Update v1 2020-05-06 19:46:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"afad8bc4-06cc-43ea-8ee8-181e9bd5d437\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:46:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9jh2k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9jh2k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9jh2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountT
oken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:46:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:46:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:46:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:46:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.18,StartTime:2020-05-06 19:46:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 19:46:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://2f2ef4e639ec058602273d0f1f0b20333316716c8d3522f30fd7aa43bc537cd4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:46:06.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7096" for this suite. 
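The dump above is verbose, but the deployment under test is simple. A rough sketch of the manifest it corresponds to, reconstructed from the fields visible in the log (replicas, labels, image, and the default 25% rolling-update parameters); anything not shown in the log is an assumption:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%          # at most one extra pod during the rollout
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13

With this strategy the controller scales a new ReplicaSet up before scaling the adopted one down to zero, which is why the log shows the old ReplicaSet ending at Replicas:*0 while the new one reports ReadyReplicas:1.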
• [SLOW TEST:12.067 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":19,"skipped":325,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:46:06.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 6 19:46:06.590: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:46:22.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1555" for this suite. 
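The "rename" in this test is a change to a version name inside a multi-version CRD: the old name stops being served and the OpenAPI spec is republished under the new one. A minimal sketch of such a CRD; the group, kind, and version names here are illustrative, not the ones the test generates:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com      # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v2        # renamed from v1; the published spec now serves v2 and drops v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v3        # the "other version", which must remain unchanged
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true

The checks in the log map directly onto this: the new version name is served, the old one disappears from the published OpenAPI document, and the untouched version keeps its schema.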
• [SLOW TEST:16.408 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":20,"skipped":326,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:46:22.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 19:46:23.632: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 19:46:26.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391183, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391183, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391183, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391183, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 19:46:28.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391183, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391183, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391183, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391183, loc:(*time.Location)(0x7c2f200)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 19:46:31.508: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:46:32.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9359" for this suite. STEP: Destroying namespace "webhook-9359-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.183 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":21,"skipped":326,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:46:33.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 6 19:46:33.254: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 19:46:33.595: INFO: Waiting for terminating namespaces to be deleted... 
May 6 19:46:33.598: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 6 19:46:33.601: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 6 19:46:33.602: INFO: Container kindnet-cni ready: true, restart count 0 May 6 19:46:33.602: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 6 19:46:33.602: INFO: Container kube-proxy ready: true, restart count 0 May 6 19:46:33.602: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 6 19:46:33.606: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 6 19:46:33.606: INFO: Container kindnet-cni ready: true, restart count 0 May 6 19:46:33.606: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 6 19:46:33.606: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-039129b8-588b-4347-bcc1-27d1a91d5f12 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-039129b8-588b-4347-bcc1-27d1a91d5f12 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-039129b8-588b-4347-bcc1-27d1a91d5f12 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:51:48.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8210" for this suite.
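The conflict being validated: a hostPort with an empty/0.0.0.0 hostIP claims the port on every address of the node, so a second pod asking for the same port and protocol on any specific hostIP of that node cannot be scheduled there. A rough sketch of the two pod specs under stated assumptions (the image is illustrative; the node-selector label is the random one the test applied above):

apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/e2e-039129b8-588b-4347-bcc1-27d1a91d5f12: "95"
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    ports:
    - containerPort: 54322
      hostPort: 54322      # hostIP omitted, i.e. 0.0.0.0: binds all node addresses
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/e2e-039129b8-588b-4347-bcc1-27d1a91d5f12: "95"
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    ports:
    - containerPort: 54322
      hostPort: 54322
      hostIP: 127.0.0.1    # conflicts with pod4's wildcard bind, so pod5 stays Pending
      protocol: TCP

pod5 must remain unschedulable for the whole observation window, which is why this [Serial] test occupies the node for several minutes before tearing down.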
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:315.225 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":22,"skipped":332,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:51:48.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2936 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 6 19:51:48.416: INFO: Found 0 stateful pods, waiting for 3 May 6 19:51:58.430: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 19:51:58.430: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 19:51:58.430: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 6 19:52:08.483: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 19:52:08.483: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 19:52:08.483: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 6 19:52:08.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2936 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 19:52:16.005: INFO: stderr: "I0506 19:52:15.883577 229 log.go:172] (0xc0007151e0) (0xc0005837c0) Create stream\nI0506 19:52:15.883623 229 log.go:172] (0xc0007151e0) (0xc0005837c0) Stream added, broadcasting: 1\nI0506 19:52:15.886194 229 log.go:172] (0xc0007151e0) Reply frame received for 1\nI0506 19:52:15.886241 229 log.go:172] (0xc0007151e0) (0xc000583a40) Create stream\nI0506 19:52:15.886250 229 log.go:172] (0xc0007151e0) (0xc000583a40) Stream added, broadcasting: 3\nI0506 19:52:15.887151 229 
log.go:172] (0xc0007151e0) Reply frame received for 3\nI0506 19:52:15.887186 229 log.go:172] (0xc0007151e0) (0xc0006720a0) Create stream\nI0506 19:52:15.887199 229 log.go:172] (0xc0007151e0) (0xc0006720a0) Stream added, broadcasting: 5\nI0506 19:52:15.887979 229 log.go:172] (0xc0007151e0) Reply frame received for 5\nI0506 19:52:15.946344 229 log.go:172] (0xc0007151e0) Data frame received for 5\nI0506 19:52:15.946384 229 log.go:172] (0xc0006720a0) (5) Data frame handling\nI0506 19:52:15.946408 229 log.go:172] (0xc0006720a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 19:52:15.992965 229 log.go:172] (0xc0007151e0) Data frame received for 3\nI0506 19:52:15.992999 229 log.go:172] (0xc000583a40) (3) Data frame handling\nI0506 19:52:15.993013 229 log.go:172] (0xc000583a40) (3) Data frame sent\nI0506 19:52:15.993229 229 log.go:172] (0xc0007151e0) Data frame received for 3\nI0506 19:52:15.993250 229 log.go:172] (0xc000583a40) (3) Data frame handling\nI0506 19:52:15.993643 229 log.go:172] (0xc0007151e0) Data frame received for 5\nI0506 19:52:15.993657 229 log.go:172] (0xc0006720a0) (5) Data frame handling\nI0506 19:52:15.995839 229 log.go:172] (0xc0007151e0) Data frame received for 1\nI0506 19:52:15.995850 229 log.go:172] (0xc0005837c0) (1) Data frame handling\nI0506 19:52:15.995856 229 log.go:172] (0xc0005837c0) (1) Data frame sent\nI0506 19:52:15.996031 229 log.go:172] (0xc0007151e0) (0xc0005837c0) Stream removed, broadcasting: 1\nI0506 19:52:15.996055 229 log.go:172] (0xc0007151e0) Go away received\nI0506 19:52:15.996375 229 log.go:172] (0xc0007151e0) (0xc0005837c0) Stream removed, broadcasting: 1\nI0506 19:52:15.996389 229 log.go:172] (0xc0007151e0) (0xc000583a40) Stream removed, broadcasting: 3\nI0506 19:52:15.996395 229 log.go:172] (0xc0007151e0) (0xc0006720a0) Stream removed, broadcasting: 5\n" May 6 19:52:16.005: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 19:52:16.006: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 6 19:52:26.040: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 6 19:52:36.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2936 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 19:52:36.739: INFO: stderr: "I0506 19:52:36.535263 258 log.go:172] (0xc000b7cfd0) (0xc000807f40) Create stream\nI0506 19:52:36.535316 258 log.go:172] (0xc000b7cfd0) (0xc000807f40) Stream added, broadcasting: 1\nI0506 19:52:36.540456 258 log.go:172] (0xc000b7cfd0) Reply frame received for 1\nI0506 19:52:36.540509 258 log.go:172] (0xc000b7cfd0) (0xc000558280) Create stream\nI0506 19:52:36.540530 258 log.go:172] (0xc000b7cfd0) (0xc000558280) Stream added, broadcasting: 3\nI0506 19:52:36.541733 258 log.go:172] (0xc000b7cfd0) Reply frame received for 3\nI0506 19:52:36.541768 258 log.go:172] (0xc000b7cfd0) (0xc000532dc0) Create stream\nI0506 19:52:36.541776 258 log.go:172] (0xc000b7cfd0) (0xc000532dc0) Stream added, broadcasting: 5\nI0506 19:52:36.542669 258 log.go:172] (0xc000b7cfd0) Reply frame received for 5\nI0506 19:52:36.593009 258 log.go:172] (0xc000b7cfd0) Data frame received for 5\nI0506 19:52:36.593040 258 
log.go:172] (0xc000532dc0) (5) Data frame handling\nI0506 19:52:36.593059 258 log.go:172] (0xc000532dc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 19:52:36.730541 258 log.go:172] (0xc000b7cfd0) Data frame received for 3\nI0506 19:52:36.730567 258 log.go:172] (0xc000558280) (3) Data frame handling\nI0506 19:52:36.730579 258 log.go:172] (0xc000558280) (3) Data frame sent\nI0506 19:52:36.730588 258 log.go:172] (0xc000b7cfd0) Data frame received for 3\nI0506 19:52:36.730594 258 log.go:172] (0xc000558280) (3) Data frame handling\nI0506 19:52:36.730677 258 log.go:172] (0xc000b7cfd0) Data frame received for 5\nI0506 19:52:36.730685 258 log.go:172] (0xc000532dc0) (5) Data frame handling\nI0506 19:52:36.732575 258 log.go:172] (0xc000b7cfd0) Data frame received for 1\nI0506 19:52:36.732621 258 log.go:172] (0xc000807f40) (1) Data frame handling\nI0506 19:52:36.732650 258 log.go:172] (0xc000807f40) (1) Data frame sent\nI0506 19:52:36.732677 258 log.go:172] (0xc000b7cfd0) (0xc000807f40) Stream removed, broadcasting: 1\nI0506 19:52:36.732715 258 log.go:172] (0xc000b7cfd0) Go away received\nI0506 19:52:36.733533 258 log.go:172] (0xc000b7cfd0) (0xc000807f40) Stream removed, broadcasting: 1\nI0506 19:52:36.733569 258 log.go:172] (0xc000b7cfd0) (0xc000558280) Stream removed, broadcasting: 3\nI0506 19:52:36.733585 258 log.go:172] (0xc000b7cfd0) (0xc000532dc0) Stream removed, broadcasting: 5\n" May 6 19:52:36.739: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 19:52:36.739: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 19:52:46.791: INFO: Waiting for StatefulSet statefulset-2936/ss2 to complete update May 6 19:52:46.791: INFO: Waiting for Pod statefulset-2936/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 6 19:52:46.791: INFO: Waiting for Pod statefulset-2936/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 6 19:52:56.845: INFO: Waiting for StatefulSet statefulset-2936/ss2 to complete update May 6 19:52:56.845: INFO: Waiting for Pod statefulset-2936/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 6 19:53:06.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2936 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 19:53:07.220: INFO: stderr: "I0506 19:53:06.928824 277 log.go:172] (0xc000a6f130) (0xc000af2280) Create stream\nI0506 19:53:06.928883 277 log.go:172] (0xc000a6f130) (0xc000af2280) Stream added, broadcasting: 1\nI0506 19:53:06.932623 277 log.go:172] (0xc000a6f130) Reply frame received for 1\nI0506 19:53:06.932664 277 log.go:172] (0xc000a6f130) (0xc0005cc640) Create stream\nI0506 19:53:06.932683 277 log.go:172] (0xc000a6f130) (0xc0005cc640) Stream added, broadcasting: 3\nI0506 19:53:06.933939 277 log.go:172] (0xc000a6f130) Reply frame received for 3\nI0506 19:53:06.933968 277 log.go:172] (0xc000a6f130) (0xc0004f4640) Create stream\nI0506 19:53:06.933976 277 log.go:172] (0xc000a6f130) (0xc0004f4640) Stream added, broadcasting: 5\nI0506 19:53:06.934730 277 log.go:172] (0xc000a6f130) Reply frame received for 5\nI0506 19:53:07.002892 277 log.go:172] (0xc000a6f130) Data frame received for 5\nI0506 19:53:07.002921 277 log.go:172] (0xc0004f4640) (5) Data frame handling\nI0506 19:53:07.002940 277 
log.go:172] (0xc0004f4640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 19:53:07.213952 277 log.go:172] (0xc000a6f130) Data frame received for 3\nI0506 19:53:07.213993 277 log.go:172] (0xc0005cc640) (3) Data frame handling\nI0506 19:53:07.214029 277 log.go:172] (0xc000a6f130) Data frame received for 5\nI0506 19:53:07.214063 277 log.go:172] (0xc0004f4640) (5) Data frame handling\nI0506 19:53:07.214090 277 log.go:172] (0xc0005cc640) (3) Data frame sent\nI0506 19:53:07.214100 277 log.go:172] (0xc000a6f130) Data frame received for 3\nI0506 19:53:07.214105 277 log.go:172] (0xc0005cc640) (3) Data frame handling\nI0506 19:53:07.216126 277 log.go:172] (0xc000a6f130) Data frame received for 1\nI0506 19:53:07.216147 277 log.go:172] (0xc000af2280) (1) Data frame handling\nI0506 19:53:07.216166 277 log.go:172] (0xc000af2280) (1) Data frame sent\nI0506 19:53:07.216184 277 log.go:172] (0xc000a6f130) (0xc000af2280) Stream removed, broadcasting: 1\nI0506 19:53:07.216286 277 log.go:172] (0xc000a6f130) Go away received\nI0506 19:53:07.216519 277 log.go:172] (0xc000a6f130) (0xc000af2280) Stream removed, broadcasting: 1\nI0506 19:53:07.216538 277 log.go:172] (0xc000a6f130) (0xc0005cc640) Stream removed, broadcasting: 3\nI0506 19:53:07.216548 277 log.go:172] (0xc000a6f130) (0xc0004f4640) Stream removed, broadcasting: 5\n" May 6 19:53:07.221: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 19:53:07.221: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 19:53:17.252: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 6 19:53:27.564: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2936 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 19:53:27.806: INFO: stderr: "I0506 19:53:27.724974 297 log.go:172] (0xc000a989a0) (0xc000ae46e0) Create stream\nI0506 19:53:27.725045 297 log.go:172] (0xc000a989a0) (0xc000ae46e0) Stream added, broadcasting: 1\nI0506 19:53:27.730564 297 log.go:172] (0xc000a989a0) Reply frame received for 1\nI0506 19:53:27.730620 297 log.go:172] (0xc000a989a0) (0xc000254460) Create stream\nI0506 19:53:27.730644 297 log.go:172] (0xc000a989a0) (0xc000254460) Stream added, broadcasting: 3\nI0506 19:53:27.731655 297 log.go:172] (0xc000a989a0) Reply frame received for 3\nI0506 19:53:27.731702 297 log.go:172] (0xc000a989a0) (0xc0006d0b40) Create stream\nI0506 19:53:27.731725 297 log.go:172] (0xc000a989a0) (0xc0006d0b40) Stream added, broadcasting: 5\nI0506 19:53:27.732709 297 log.go:172] (0xc000a989a0) Reply frame received for 5\nI0506 19:53:27.796785 297 log.go:172] (0xc000a989a0) Data frame received for 5\nI0506 19:53:27.796826 297 log.go:172] (0xc0006d0b40) (5) Data frame handling\nI0506 19:53:27.796852 297 log.go:172] (0xc0006d0b40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 19:53:27.798662 297 log.go:172] (0xc000a989a0) Data frame received for 3\nI0506 19:53:27.798682 297 log.go:172] (0xc000254460) (3) Data frame handling\nI0506 19:53:27.798691 297 log.go:172] (0xc000254460) (3) Data frame sent\nI0506 19:53:27.799203 297 log.go:172] (0xc000a989a0) Data frame received for 3\nI0506 19:53:27.799223 297 log.go:172] (0xc000254460) (3) Data frame handling\nI0506 19:53:27.799267 297 log.go:172] (0xc000a989a0) Data frame received for 5\nI0506 
19:53:27.799309 297 log.go:172] (0xc0006d0b40) (5) Data frame handling\nI0506 19:53:27.801461 297 log.go:172] (0xc000a989a0) Data frame received for 1\nI0506 19:53:27.801483 297 log.go:172] (0xc000ae46e0) (1) Data frame handling\nI0506 19:53:27.801494 297 log.go:172] (0xc000ae46e0) (1) Data frame sent\nI0506 19:53:27.801507 297 log.go:172] (0xc000a989a0) (0xc000ae46e0) Stream removed, broadcasting: 1\nI0506 19:53:27.801545 297 log.go:172] (0xc000a989a0) Go away received\nI0506 19:53:27.801812 297 log.go:172] (0xc000a989a0) (0xc000ae46e0) Stream removed, broadcasting: 1\nI0506 19:53:27.801833 297 log.go:172] (0xc000a989a0) (0xc000254460) Stream removed, broadcasting: 3\nI0506 19:53:27.801846 297 log.go:172] (0xc000a989a0) (0xc0006d0b40) Stream removed, broadcasting: 5\n" May 6 19:53:27.806: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 19:53:27.806: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 19:53:37.930: INFO: Waiting for StatefulSet statefulset-2936/ss2 to complete update May 6 19:53:37.930: INFO: Waiting for Pod statefulset-2936/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 6 19:53:37.930: INFO: Waiting for Pod statefulset-2936/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 6 19:53:47.938: INFO: Waiting for StatefulSet statefulset-2936/ss2 to complete update May 6 19:53:47.938: INFO: Waiting for Pod statefulset-2936/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 6 19:53:57.939: INFO: Waiting for StatefulSet statefulset-2936/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 6 19:54:07.939: INFO: Deleting all statefulset in ns statefulset-2936 May 6 19:54:07.942: INFO: Scaling statefulset ss2 to 0 May 6 19:54:38.002: INFO: Waiting for statefulset status.replicas updated to 0 May 6 19:54:38.005: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:54:38.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2936" for this suite. 
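What the rollout above exercises, in manifest form: a StatefulSet with the default RollingUpdate strategy whose template image is flipped from httpd:2.4.38-alpine to 2.4.39-alpine and back. A minimal sketch, with the selector labels and container name assumed (the excerpt only shows the set name, the service name, and the two images):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test          # the service created in the BeforeEach above
  replicas: 3
  selector:
    matchLabels:
      app: ss2               # assumed label; the excerpt doesn't show the selector
  updateStrategy:
    type: RollingUpdate      # pods replaced one at a time, highest ordinal first
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver      # assumed container name
        image: docker.io/library/httpd:2.4.38-alpine   # updated to 2.4.39-alpine, then rolled back

Each template change produces a new controller revision (the ss2-84f9d6bf57 and ss2-65c7964b94 hashes in the log), and "rolling back" is simply another template update to the previous spec, again applied in reverse ordinal order.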
• [SLOW TEST:169.748 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":23,"skipped":340,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:54:38.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 6 19:54:43.703: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:54:43.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-189" for this suite. 
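The termination-message check above works because the container writes nothing to /dev/termination-log but exits non-zero after printing to stdout; with terminationMessagePolicy FallbackToLogsOnError, the kubelet then uses the tail of the container log ("DONE") as the termination message. A minimal sketch of such a pod; the command and names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Print to stdout, write nothing to /dev/termination-log, then fail.
    command: ["/bin/sh", "-c", "echo -n DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError

Because the termination-log file stays empty and the container reaches a Failed state, the reported message falls back to the log output, matching the "Expected: &{DONE}" assertion in the log.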
• [SLOW TEST:5.919 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":24,"skipped":342,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:54:43.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-45811820-00ed-4801-834b-07dcf40b4245 STEP: Creating a pod to test consume configMaps May 6 19:54:44.094: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-48f365be-711c-46a4-8682-3e196ca9e166" in namespace "projected-5992" to be "Succeeded or Failed" May 6 19:54:44.251: INFO: Pod "pod-projected-configmaps-48f365be-711c-46a4-8682-3e196ca9e166": Phase="Pending", Reason="", readiness=false. Elapsed: 156.159712ms May 6 19:54:46.525: INFO: Pod "pod-projected-configmaps-48f365be-711c-46a4-8682-3e196ca9e166": Phase="Pending", Reason="", readiness=false. Elapsed: 2.430838235s May 6 19:54:48.530: INFO: Pod "pod-projected-configmaps-48f365be-711c-46a4-8682-3e196ca9e166": Phase="Running", Reason="", readiness=true. Elapsed: 4.435833597s May 6 19:54:50.535: INFO: Pod "pod-projected-configmaps-48f365be-711c-46a4-8682-3e196ca9e166": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.440339497s STEP: Saw pod success May 6 19:54:50.535: INFO: Pod "pod-projected-configmaps-48f365be-711c-46a4-8682-3e196ca9e166" satisfied condition "Succeeded or Failed" May 6 19:54:50.538: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-48f365be-711c-46a4-8682-3e196ca9e166 container projected-configmap-volume-test: STEP: delete the pod May 6 19:54:50.584: INFO: Waiting for pod pod-projected-configmaps-48f365be-711c-46a4-8682-3e196ca9e166 to disappear May 6 19:54:50.617: INFO: Pod pod-projected-configmaps-48f365be-711c-46a4-8682-3e196ca9e166 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:54:50.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5992" for this suite. • [SLOW TEST:6.678 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":25,"skipped":345,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:54:50.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-6080 STEP: creating replication controller nodeport-test in namespace services-6080 I0506 19:54:51.095548 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6080, replica count: 2 I0506 19:54:54.146030 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 19:54:57.146256 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 19:55:00.146478 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 19:55:03.146834 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 19:55:03.146: INFO: Creating new exec pod May 6 19:55:10.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec 
--namespace=services-6080 execpodzm76j -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 6 19:55:10.397: INFO: stderr: "I0506 19:55:10.328024 317 log.go:172] (0xc000962790) (0xc000386f00) Create stream\nI0506 19:55:10.328094 317 log.go:172] (0xc000962790) (0xc000386f00) Stream added, broadcasting: 1\nI0506 19:55:10.330754 317 log.go:172] (0xc000962790) Reply frame received for 1\nI0506 19:55:10.330804 317 log.go:172] (0xc000962790) (0xc000674640) Create stream\nI0506 19:55:10.330828 317 log.go:172] (0xc000962790) (0xc000674640) Stream added, broadcasting: 3\nI0506 19:55:10.331713 317 log.go:172] (0xc000962790) Reply frame received for 3\nI0506 19:55:10.331754 317 log.go:172] (0xc000962790) (0xc000628d20) Create stream\nI0506 19:55:10.331779 317 log.go:172] (0xc000962790) (0xc000628d20) Stream added, broadcasting: 5\nI0506 19:55:10.332513 317 log.go:172] (0xc000962790) Reply frame received for 5\nI0506 19:55:10.388955 317 log.go:172] (0xc000962790) Data frame received for 5\nI0506 19:55:10.388987 317 log.go:172] (0xc000628d20) (5) Data frame handling\nI0506 19:55:10.389015 317 log.go:172] (0xc000628d20) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0506 19:55:10.389346 317 log.go:172] (0xc000962790) Data frame received for 5\nI0506 19:55:10.389364 317 log.go:172] (0xc000628d20) (5) Data frame handling\nI0506 19:55:10.389375 317 log.go:172] (0xc000628d20) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0506 19:55:10.389643 317 log.go:172] (0xc000962790) Data frame received for 3\nI0506 19:55:10.389661 317 log.go:172] (0xc000674640) (3) Data frame handling\nI0506 19:55:10.389690 317 log.go:172] (0xc000962790) Data frame received for 5\nI0506 19:55:10.389699 317 log.go:172] (0xc000628d20) (5) Data frame handling\nI0506 19:55:10.391107 317 log.go:172] (0xc000962790) Data frame received for 1\nI0506 19:55:10.391129 317 log.go:172] (0xc000386f00) (1) Data frame handling\nI0506 19:55:10.391157 317 log.go:172] (0xc000386f00) (1) Data frame sent\nI0506 19:55:10.391177 317 log.go:172] (0xc000962790) (0xc000386f00) Stream removed, broadcasting: 1\nI0506 19:55:10.391238 317 log.go:172] (0xc000962790) Go away received\nI0506 19:55:10.391541 317 log.go:172] (0xc000962790) (0xc000386f00) Stream removed, broadcasting: 1\nI0506 19:55:10.391563 317 log.go:172] (0xc000962790) (0xc000674640) Stream removed, broadcasting: 3\nI0506 19:55:10.391579 317 log.go:172] (0xc000962790) (0xc000628d20) Stream removed, broadcasting: 5\n" May 6 19:55:10.397: INFO: stdout: "" May 6 19:55:10.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6080 execpodzm76j -- /bin/sh -x -c nc -zv -t -w 2 10.104.148.190 80' May 6 19:55:10.599: INFO: stderr: "I0506 19:55:10.531121 336 log.go:172] (0xc000b8b4a0) (0xc000b7c3c0) Create stream\nI0506 19:55:10.531181 336 log.go:172] (0xc000b8b4a0) (0xc000b7c3c0) Stream added, broadcasting: 1\nI0506 19:55:10.534514 336 log.go:172] (0xc000b8b4a0) Reply frame received for 1\nI0506 19:55:10.534662 336 log.go:172] (0xc000b8b4a0) (0xc0005f7400) Create stream\nI0506 19:55:10.534739 336 log.go:172] (0xc000b8b4a0) (0xc0005f7400) Stream added, broadcasting: 3\nI0506 19:55:10.537021 336 log.go:172] (0xc000b8b4a0) Reply frame received for 3\nI0506 19:55:10.537093 336 log.go:172] (0xc000b8b4a0) (0xc00084abe0) Create stream\nI0506 19:55:10.537391 336 log.go:172] (0xc000b8b4a0) (0xc00084abe0) Stream added, broadcasting: 5\nI0506 19:55:10.538748 336 log.go:172] (0xc000b8b4a0) 
Reply frame received for 5\nI0506 19:55:10.592328 336 log.go:172] (0xc000b8b4a0) Data frame received for 5\nI0506 19:55:10.592376 336 log.go:172] (0xc00084abe0) (5) Data frame handling\nI0506 19:55:10.592401 336 log.go:172] (0xc00084abe0) (5) Data frame sent\nI0506 19:55:10.592418 336 log.go:172] (0xc000b8b4a0) Data frame received for 5\nI0506 19:55:10.592432 336 log.go:172] (0xc00084abe0) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.148.190 80\nConnection to 10.104.148.190 80 port [tcp/http] succeeded!\nI0506 19:55:10.592469 336 log.go:172] (0xc000b8b4a0) Data frame received for 3\nI0506 19:55:10.592509 336 log.go:172] (0xc0005f7400) (3) Data frame handling\nI0506 19:55:10.594264 336 log.go:172] (0xc000b8b4a0) Data frame received for 1\nI0506 19:55:10.594298 336 log.go:172] (0xc000b7c3c0) (1) Data frame handling\nI0506 19:55:10.594318 336 log.go:172] (0xc000b7c3c0) (1) Data frame sent\nI0506 19:55:10.594342 336 log.go:172] (0xc000b8b4a0) (0xc000b7c3c0) Stream removed, broadcasting: 1\nI0506 19:55:10.594447 336 log.go:172] (0xc000b8b4a0) Go away received\nI0506 19:55:10.594947 336 log.go:172] (0xc000b8b4a0) (0xc000b7c3c0) Stream removed, broadcasting: 1\nI0506 19:55:10.594974 336 log.go:172] (0xc000b8b4a0) (0xc0005f7400) Stream removed, broadcasting: 3\nI0506 19:55:10.594986 336 log.go:172] (0xc000b8b4a0) (0xc00084abe0) Stream removed, broadcasting: 5\n" May 6 19:55:10.599: INFO: stdout: "" May 6 19:55:10.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6080 execpodzm76j -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32581' May 6 19:55:10.794: INFO: stderr: "I0506 19:55:10.717565 356 log.go:172] (0xc0009db1e0) (0xc000b18500) Create stream\nI0506 19:55:10.717628 356 log.go:172] (0xc0009db1e0) (0xc000b18500) Stream added, broadcasting: 1\nI0506 19:55:10.723091 356 log.go:172] (0xc0009db1e0) Reply frame received for 1\nI0506 19:55:10.723143 356 log.go:172] (0xc0009db1e0) (0xc0004ec5a0) Create stream\nI0506 19:55:10.723164 356 log.go:172] (0xc0009db1e0) (0xc0004ec5a0) Stream added, broadcasting: 3\nI0506 19:55:10.724091 356 log.go:172] (0xc0009db1e0) Reply frame received for 3\nI0506 19:55:10.724122 356 log.go:172] (0xc0009db1e0) (0xc0002bedc0) Create stream\nI0506 19:55:10.724132 356 log.go:172] (0xc0009db1e0) (0xc0002bedc0) Stream added, broadcasting: 5\nI0506 19:55:10.725236 356 log.go:172] (0xc0009db1e0) Reply frame received for 5\nI0506 19:55:10.786178 356 log.go:172] (0xc0009db1e0) Data frame received for 5\nI0506 19:55:10.786317 356 log.go:172] (0xc0002bedc0) (5) Data frame handling\nI0506 19:55:10.786420 356 log.go:172] (0xc0002bedc0) (5) Data frame sent\nI0506 19:55:10.786455 356 log.go:172] (0xc0009db1e0) Data frame received for 5\nI0506 19:55:10.786473 356 log.go:172] (0xc0002bedc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32581\nConnection to 172.17.0.13 32581 port [tcp/32581] succeeded!\nI0506 19:55:10.786511 356 log.go:172] (0xc0002bedc0) (5) Data frame sent\nI0506 19:55:10.787091 356 log.go:172] (0xc0009db1e0) Data frame received for 5\nI0506 19:55:10.787127 356 log.go:172] (0xc0009db1e0) Data frame received for 3\nI0506 19:55:10.787157 356 log.go:172] (0xc0004ec5a0) (3) Data frame handling\nI0506 19:55:10.787180 356 log.go:172] (0xc0002bedc0) (5) Data frame handling\nI0506 19:55:10.788953 356 log.go:172] (0xc0009db1e0) Data frame received for 1\nI0506 19:55:10.788973 356 log.go:172] (0xc000b18500) (1) Data frame handling\nI0506 19:55:10.788981 356 log.go:172] (0xc000b18500) (1) 
Data frame sent\nI0506 19:55:10.788991 356 log.go:172] (0xc0009db1e0) (0xc000b18500) Stream removed, broadcasting: 1\nI0506 19:55:10.789096 356 log.go:172] (0xc0009db1e0) Go away received\nI0506 19:55:10.789340 356 log.go:172] (0xc0009db1e0) (0xc000b18500) Stream removed, broadcasting: 1\nI0506 19:55:10.789352 356 log.go:172] (0xc0009db1e0) (0xc0004ec5a0) Stream removed, broadcasting: 3\nI0506 19:55:10.789358 356 log.go:172] (0xc0009db1e0) (0xc0002bedc0) Stream removed, broadcasting: 5\n" May 6 19:55:10.794: INFO: stdout: "" May 6 19:55:10.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6080 execpodzm76j -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32581' May 6 19:55:12.212: INFO: stderr: "I0506 19:55:12.133920 376 log.go:172] (0xc000bd91e0) (0xc000b5c5a0) Create stream\nI0506 19:55:12.134020 376 log.go:172] (0xc000bd91e0) (0xc000b5c5a0) Stream added, broadcasting: 1\nI0506 19:55:12.144029 376 log.go:172] (0xc000bd91e0) Reply frame received for 1\nI0506 19:55:12.144078 376 log.go:172] (0xc000bd91e0) (0xc000836a00) Create stream\nI0506 19:55:12.144090 376 log.go:172] (0xc000bd91e0) (0xc000836a00) Stream added, broadcasting: 3\nI0506 19:55:12.144940 376 log.go:172] (0xc000bd91e0) Reply frame received for 3\nI0506 19:55:12.144967 376 log.go:172] (0xc000bd91e0) (0xc000836f00) Create stream\nI0506 19:55:12.144975 376 log.go:172] (0xc000bd91e0) (0xc000836f00) Stream added, broadcasting: 5\nI0506 19:55:12.146191 376 log.go:172] (0xc000bd91e0) Reply frame received for 5\nI0506 19:55:12.206807 376 log.go:172] (0xc000bd91e0) Data frame received for 5\nI0506 19:55:12.206835 376 log.go:172] (0xc000836f00) (5) Data frame handling\nI0506 19:55:12.206846 376 log.go:172] (0xc000836f00) (5) Data frame sent\nI0506 19:55:12.206852 376 log.go:172] (0xc000bd91e0) Data frame received for 5\nI0506 19:55:12.206856 376 log.go:172] (0xc000836f00) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32581\nConnection to 172.17.0.12 32581 port [tcp/32581] succeeded!\nI0506 19:55:12.206874 376 log.go:172] (0xc000bd91e0) Data frame received for 3\nI0506 19:55:12.206880 376 log.go:172] (0xc000836a00) (3) Data frame handling\nI0506 19:55:12.207931 376 log.go:172] (0xc000bd91e0) Data frame received for 1\nI0506 19:55:12.207954 376 log.go:172] (0xc000b5c5a0) (1) Data frame handling\nI0506 19:55:12.207964 376 log.go:172] (0xc000b5c5a0) (1) Data frame sent\nI0506 19:55:12.207973 376 log.go:172] (0xc000bd91e0) (0xc000b5c5a0) Stream removed, broadcasting: 1\nI0506 19:55:12.207988 376 log.go:172] (0xc000bd91e0) Go away received\nI0506 19:55:12.208301 376 log.go:172] (0xc000bd91e0) (0xc000b5c5a0) Stream removed, broadcasting: 1\nI0506 19:55:12.208322 376 log.go:172] (0xc000bd91e0) (0xc000836a00) Stream removed, broadcasting: 3\nI0506 19:55:12.208330 376 log.go:172] (0xc000bd91e0) (0xc000836f00) Stream removed, broadcasting: 5\n" May 6 19:55:12.212: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:55:12.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6080" for this suite. 
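[Editor's note] The flow this spec exercises can be reproduced by hand roughly as follows. This is a sketch, not output from the run above: the service name, namespace, and exec pod name are illustrative, a node IP must be substituted, and it assumes pods labeled app=nodeport-test are already serving on port 80 (the test sets this up with a replication controller).

    $ kubectl create service nodeport nodeport-test --tcp=80:80 --namespace=services-6080
    $ NODE_PORT=$(kubectl get svc nodeport-test --namespace=services-6080 \
        -o jsonpath='{.spec.ports[0].nodePort}')
    # Probe the service by DNS name, then a node's IP on the allocated NodePort,
    # mirroring the nc checks recorded in the log:
    $ kubectl exec --namespace=services-6080 execpod -- /bin/sh -x -c 'nc -zv -t -w 2 nodeport-test 80'
    $ kubectl exec --namespace=services-6080 execpod -- /bin/sh -x -c "nc -zv -t -w 2 <node-ip> $NODE_PORT"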
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:21.841 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":26,"skipped":357,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:55:12.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 19:55:12.747: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 6 19:55:15.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6408 create -f -' May 6 19:55:20.191: INFO: stderr: "" May 6 19:55:20.191: INFO: stdout: "e2e-test-crd-publish-openapi-1199-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 6 19:55:20.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6408 delete e2e-test-crd-publish-openapi-1199-crds test-cr' May 6 19:55:20.304: INFO: stderr: "" May 6 19:55:20.304: INFO: stdout: "e2e-test-crd-publish-openapi-1199-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 6 19:55:20.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6408 apply -f -' May 6 19:55:20.571: INFO: stderr: "" May 6 19:55:20.571: INFO: stdout: "e2e-test-crd-publish-openapi-1199-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 6 19:55:20.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6408 delete e2e-test-crd-publish-openapi-1199-crds test-cr' May 6 19:55:21.073: INFO: stderr: "" May 6 19:55:21.073: INFO: stdout: "e2e-test-crd-publish-openapi-1199-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 6 19:55:21.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1199-crds' May 6 19:55:21.746: INFO: stderr: "" May 6 19:55:21.746: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1199-crd\nVERSION: 
crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:55:24.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6408" for this suite. • [SLOW TEST:12.256 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":27,"skipped":362,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:55:24.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 6 19:55:24.812: INFO: Waiting up to 5m0s for pod "var-expansion-adf4cf56-040e-4886-92d2-042e4e97c6c0" in namespace "var-expansion-9297" to be "Succeeded or Failed" May 6 19:55:24.834: INFO: Pod "var-expansion-adf4cf56-040e-4886-92d2-042e4e97c6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 22.551955ms May 6 19:55:26.837: INFO: Pod "var-expansion-adf4cf56-040e-4886-92d2-042e4e97c6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02595555s May 6 19:55:28.841: INFO: Pod "var-expansion-adf4cf56-040e-4886-92d2-042e4e97c6c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029704925s STEP: Saw pod success May 6 19:55:28.841: INFO: Pod "var-expansion-adf4cf56-040e-4886-92d2-042e4e97c6c0" satisfied condition "Succeeded or Failed" May 6 19:55:28.844: INFO: Trying to get logs from node latest-worker pod var-expansion-adf4cf56-040e-4886-92d2-042e4e97c6c0 container dapi-container: STEP: delete the pod May 6 19:55:28.923: INFO: Waiting for pod var-expansion-adf4cf56-040e-4886-92d2-042e4e97c6c0 to disappear May 6 19:55:28.934: INFO: Pod var-expansion-adf4cf56-040e-4886-92d2-042e4e97c6c0 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:55:28.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9297" for this suite. 
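[Editor's note] The substitution verified by this spec is Kubernetes' $(VAR) expansion in a container's args. A minimal hand-written equivalent (pod name and message are illustrative, not taken from the run):

    $ kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        env:
        - name: MESSAGE
          value: "test message"
        command: ["/bin/sh", "-c"]
        # $(MESSAGE) is expanded by Kubernetes from the container's own env
        # before the command runs; the quoted heredoc delimiter keeps the
        # local shell from expanding it first.
        args: ["echo $(MESSAGE)"]
    EOF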
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":28,"skipped":369,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:55:28.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:55:40.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4688" for this suite. • [SLOW TEST:11.697 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":288,"completed":29,"skipped":391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:55:40.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-0548c5a3-80f3-4154-ad61-299f3678cf96 STEP: Creating a pod to test consume secrets May 6 19:55:41.199: INFO: Waiting up to 5m0s for pod "pod-secrets-bf6aae3d-2b4a-4f62-a0d5-fd0743c14d06" in namespace "secrets-5691" to be "Succeeded or Failed" May 6 19:55:41.382: INFO: Pod "pod-secrets-bf6aae3d-2b4a-4f62-a0d5-fd0743c14d06": Phase="Pending", Reason="", readiness=false. 
Elapsed: 183.083732ms May 6 19:55:43.460: INFO: Pod "pod-secrets-bf6aae3d-2b4a-4f62-a0d5-fd0743c14d06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260702195s May 6 19:55:45.464: INFO: Pod "pod-secrets-bf6aae3d-2b4a-4f62-a0d5-fd0743c14d06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.264819204s May 6 19:55:47.550: INFO: Pod "pod-secrets-bf6aae3d-2b4a-4f62-a0d5-fd0743c14d06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.351232222s STEP: Saw pod success May 6 19:55:47.551: INFO: Pod "pod-secrets-bf6aae3d-2b4a-4f62-a0d5-fd0743c14d06" satisfied condition "Succeeded or Failed" May 6 19:55:47.554: INFO: Trying to get logs from node latest-worker pod pod-secrets-bf6aae3d-2b4a-4f62-a0d5-fd0743c14d06 container secret-volume-test: STEP: delete the pod May 6 19:55:47.822: INFO: Waiting for pod pod-secrets-bf6aae3d-2b4a-4f62-a0d5-fd0743c14d06 to disappear May 6 19:55:47.840: INFO: Pod pod-secrets-bf6aae3d-2b4a-4f62-a0d5-fd0743c14d06 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:55:47.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5691" for this suite. • [SLOW TEST:7.176 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":30,"skipped":421,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:55:47.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-ad1de157-1895-48aa-98dd-0a98b0b59464 STEP: Creating a pod to test consume secrets May 6 19:55:48.041: INFO: Waiting up to 5m0s for pod "pod-secrets-c6540eec-5991-4e5d-8ec4-04a25bad5c26" in namespace "secrets-4487" to be "Succeeded or Failed" May 6 19:55:48.059: INFO: Pod "pod-secrets-c6540eec-5991-4e5d-8ec4-04a25bad5c26": Phase="Pending", Reason="", readiness=false. Elapsed: 18.529491ms May 6 19:55:50.232: INFO: Pod "pod-secrets-c6540eec-5991-4e5d-8ec4-04a25bad5c26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190992898s May 6 19:55:52.235: INFO: Pod "pod-secrets-c6540eec-5991-4e5d-8ec4-04a25bad5c26": Phase="Running", Reason="", readiness=true. Elapsed: 4.194167046s May 6 19:55:54.239: INFO: Pod "pod-secrets-c6540eec-5991-4e5d-8ec4-04a25bad5c26": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.198760513s STEP: Saw pod success May 6 19:55:54.239: INFO: Pod "pod-secrets-c6540eec-5991-4e5d-8ec4-04a25bad5c26" satisfied condition "Succeeded or Failed" May 6 19:55:54.243: INFO: Trying to get logs from node latest-worker pod pod-secrets-c6540eec-5991-4e5d-8ec4-04a25bad5c26 container secret-volume-test: STEP: delete the pod May 6 19:55:54.280: INFO: Waiting for pod pod-secrets-c6540eec-5991-4e5d-8ec4-04a25bad5c26 to disappear May 6 19:55:54.310: INFO: Pod pod-secrets-c6540eec-5991-4e5d-8ec4-04a25bad5c26 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:55:54.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4487" for this suite. • [SLOW TEST:6.470 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":31,"skipped":441,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:55:54.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:56:05.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-113" for this suite. • [SLOW TEST:11.621 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":288,"completed":32,"skipped":484,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:56:05.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3464 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3464 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3464 May 6 19:56:06.081: INFO: Found 0 stateful pods, waiting for 1 May 6 19:56:16.095: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 6 19:56:16.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3464 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 19:56:16.433: INFO: stderr: "I0506 19:56:16.239973 505 log.go:172] (0xc000b00a50) (0xc0004d7f40) Create stream\nI0506 19:56:16.240021 505 log.go:172] (0xc000b00a50) (0xc0004d7f40) Stream added, broadcasting: 1\nI0506 19:56:16.242038 505 log.go:172] (0xc000b00a50) Reply frame received for 1\nI0506 19:56:16.242071 505 log.go:172] (0xc000b00a50) (0xc00039e500) Create stream\nI0506 19:56:16.242082 505 log.go:172] (0xc000b00a50) (0xc00039e500) Stream added, broadcasting: 3\nI0506 19:56:16.242816 505 log.go:172] (0xc000b00a50) Reply frame received for 3\nI0506 19:56:16.242840 505 log.go:172] (0xc000b00a50) (0xc000304e60) Create stream\nI0506 19:56:16.242848 505 log.go:172] (0xc000b00a50) (0xc000304e60) Stream added, broadcasting: 5\nI0506 19:56:16.243499 505 log.go:172] (0xc000b00a50) Reply frame received for 5\nI0506 19:56:16.324948 505 log.go:172] (0xc000b00a50) Data frame received for 5\nI0506 19:56:16.324968 505 log.go:172] (0xc000304e60) (5) Data frame handling\nI0506 19:56:16.324980 505 log.go:172] (0xc000304e60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 19:56:16.425934 505 log.go:172] (0xc000b00a50) Data frame received for 3\nI0506 19:56:16.425967 505 log.go:172] (0xc00039e500) (3) Data frame handling\nI0506 19:56:16.425982 505 log.go:172] (0xc00039e500) (3) Data frame sent\nI0506 19:56:16.426948 505 log.go:172] (0xc000b00a50) Data frame received for 5\nI0506 19:56:16.426970 505 log.go:172] (0xc000304e60) (5) Data frame handling\nI0506 
19:56:16.427028 505 log.go:172] (0xc000b00a50) Data frame received for 3\nI0506 19:56:16.427053 505 log.go:172] (0xc00039e500) (3) Data frame handling\nI0506 19:56:16.428196 505 log.go:172] (0xc000b00a50) Data frame received for 1\nI0506 19:56:16.428215 505 log.go:172] (0xc0004d7f40) (1) Data frame handling\nI0506 19:56:16.428234 505 log.go:172] (0xc0004d7f40) (1) Data frame sent\nI0506 19:56:16.428346 505 log.go:172] (0xc000b00a50) (0xc0004d7f40) Stream removed, broadcasting: 1\nI0506 19:56:16.428471 505 log.go:172] (0xc000b00a50) Go away received\nI0506 19:56:16.429005 505 log.go:172] (0xc000b00a50) (0xc0004d7f40) Stream removed, broadcasting: 1\nI0506 19:56:16.429023 505 log.go:172] (0xc000b00a50) (0xc00039e500) Stream removed, broadcasting: 3\nI0506 19:56:16.429031 505 log.go:172] (0xc000b00a50) (0xc000304e60) Stream removed, broadcasting: 5\n" May 6 19:56:16.433: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 19:56:16.433: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 19:56:16.437: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 6 19:56:26.441: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 19:56:26.441: INFO: Waiting for statefulset status.replicas updated to 0 May 6 19:56:26.922: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999421s May 6 19:56:27.927: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.835421732s May 6 19:56:28.932: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.83084981s May 6 19:56:29.937: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.825844183s May 6 19:56:30.942: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.820710705s May 6 19:56:31.947: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.815864982s May 6 19:56:33.048: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.81090662s May 6 19:56:34.052: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.709982807s May 6 19:56:35.057: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.705767388s May 6 19:56:36.062: INFO: Verifying statefulset ss doesn't scale past 1 for another 700.657688ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3464 May 6 19:56:37.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3464 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 19:56:37.312: INFO: stderr: "I0506 19:56:37.215576 525 log.go:172] (0xc0009be000) (0xc00070ee60) Create stream\nI0506 19:56:37.215624 525 log.go:172] (0xc0009be000) (0xc00070ee60) Stream added, broadcasting: 1\nI0506 19:56:37.217089 525 log.go:172] (0xc0009be000) Reply frame received for 1\nI0506 19:56:37.217293 525 log.go:172] (0xc0009be000) (0xc0006e0c80) Create stream\nI0506 19:56:37.217313 525 log.go:172] (0xc0009be000) (0xc0006e0c80) Stream added, broadcasting: 3\nI0506 19:56:37.218068 525 log.go:172] (0xc0009be000) Reply frame received for 3\nI0506 19:56:37.218092 525 log.go:172] (0xc0009be000) (0xc0005ba1e0) Create stream\nI0506 19:56:37.218100 525 log.go:172] (0xc0009be000) (0xc0005ba1e0) Stream added, broadcasting: 5\nI0506 19:56:37.218841 525 log.go:172] (0xc0009be000) Reply 
frame received for 5\nI0506 19:56:37.291921 525 log.go:172] (0xc0009be000) Data frame received for 5\nI0506 19:56:37.291969 525 log.go:172] (0xc0005ba1e0) (5) Data frame handling\nI0506 19:56:37.291999 525 log.go:172] (0xc0005ba1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 19:56:37.302814 525 log.go:172] (0xc0009be000) Data frame received for 3\nI0506 19:56:37.302839 525 log.go:172] (0xc0006e0c80) (3) Data frame handling\nI0506 19:56:37.302857 525 log.go:172] (0xc0006e0c80) (3) Data frame sent\nI0506 19:56:37.302865 525 log.go:172] (0xc0009be000) Data frame received for 3\nI0506 19:56:37.302872 525 log.go:172] (0xc0006e0c80) (3) Data frame handling\nI0506 19:56:37.302916 525 log.go:172] (0xc0009be000) Data frame received for 5\nI0506 19:56:37.302947 525 log.go:172] (0xc0005ba1e0) (5) Data frame handling\nI0506 19:56:37.306093 525 log.go:172] (0xc0009be000) Data frame received for 1\nI0506 19:56:37.306129 525 log.go:172] (0xc00070ee60) (1) Data frame handling\nI0506 19:56:37.306162 525 log.go:172] (0xc00070ee60) (1) Data frame sent\nI0506 19:56:37.306190 525 log.go:172] (0xc0009be000) (0xc00070ee60) Stream removed, broadcasting: 1\nI0506 19:56:37.306228 525 log.go:172] (0xc0009be000) Go away received\nI0506 19:56:37.306641 525 log.go:172] (0xc0009be000) (0xc00070ee60) Stream removed, broadcasting: 1\nI0506 19:56:37.306663 525 log.go:172] (0xc0009be000) (0xc0006e0c80) Stream removed, broadcasting: 3\nI0506 19:56:37.306673 525 log.go:172] (0xc0009be000) (0xc0005ba1e0) Stream removed, broadcasting: 5\n" May 6 19:56:37.312: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 19:56:37.312: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 19:56:37.321: INFO: Found 1 stateful pods, waiting for 3 May 6 19:56:47.328: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 6 19:56:47.328: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 6 19:56:47.328: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 6 19:56:47.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3464 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 19:56:47.747: INFO: stderr: "I0506 19:56:47.461765 546 log.go:172] (0xc000aad1e0) (0xc000b04640) Create stream\nI0506 19:56:47.461864 546 log.go:172] (0xc000aad1e0) (0xc000b04640) Stream added, broadcasting: 1\nI0506 19:56:47.464763 546 log.go:172] (0xc000aad1e0) Reply frame received for 1\nI0506 19:56:47.464795 546 log.go:172] (0xc000aad1e0) (0xc000500320) Create stream\nI0506 19:56:47.464803 546 log.go:172] (0xc000aad1e0) (0xc000500320) Stream added, broadcasting: 3\nI0506 19:56:47.465669 546 log.go:172] (0xc000aad1e0) Reply frame received for 3\nI0506 19:56:47.465700 546 log.go:172] (0xc000aad1e0) (0xc0004d4e60) Create stream\nI0506 19:56:47.465711 546 log.go:172] (0xc000aad1e0) (0xc0004d4e60) Stream added, broadcasting: 5\nI0506 19:56:47.466453 546 log.go:172] (0xc000aad1e0) Reply frame received for 5\nI0506 19:56:47.529563 546 log.go:172] (0xc000aad1e0) Data frame received for 5\nI0506 19:56:47.529581 546 log.go:172] (0xc0004d4e60) (5) Data frame 
handling\nI0506 19:56:47.529592 546 log.go:172] (0xc0004d4e60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 19:56:47.739350 546 log.go:172] (0xc000aad1e0) Data frame received for 5\nI0506 19:56:47.739400 546 log.go:172] (0xc0004d4e60) (5) Data frame handling\nI0506 19:56:47.739434 546 log.go:172] (0xc000aad1e0) Data frame received for 3\nI0506 19:56:47.739452 546 log.go:172] (0xc000500320) (3) Data frame handling\nI0506 19:56:47.739482 546 log.go:172] (0xc000500320) (3) Data frame sent\nI0506 19:56:47.739504 546 log.go:172] (0xc000aad1e0) Data frame received for 3\nI0506 19:56:47.739514 546 log.go:172] (0xc000500320) (3) Data frame handling\nI0506 19:56:47.741588 546 log.go:172] (0xc000aad1e0) Data frame received for 1\nI0506 19:56:47.741616 546 log.go:172] (0xc000b04640) (1) Data frame handling\nI0506 19:56:47.741635 546 log.go:172] (0xc000b04640) (1) Data frame sent\nI0506 19:56:47.741649 546 log.go:172] (0xc000aad1e0) (0xc000b04640) Stream removed, broadcasting: 1\nI0506 19:56:47.741662 546 log.go:172] (0xc000aad1e0) Go away received\nI0506 19:56:47.742109 546 log.go:172] (0xc000aad1e0) (0xc000b04640) Stream removed, broadcasting: 1\nI0506 19:56:47.742141 546 log.go:172] (0xc000aad1e0) (0xc000500320) Stream removed, broadcasting: 3\nI0506 19:56:47.742161 546 log.go:172] (0xc000aad1e0) (0xc0004d4e60) Stream removed, broadcasting: 5\n" May 6 19:56:47.748: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 19:56:47.748: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 19:56:47.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3464 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 19:56:48.212: INFO: stderr: "I0506 19:56:48.014023 565 log.go:172] (0xc00003ab00) (0xc0000dc780) Create stream\nI0506 19:56:48.014327 565 log.go:172] (0xc00003ab00) (0xc0000dc780) Stream added, broadcasting: 1\nI0506 19:56:48.016722 565 log.go:172] (0xc00003ab00) Reply frame received for 1\nI0506 19:56:48.016768 565 log.go:172] (0xc00003ab00) (0xc000151680) Create stream\nI0506 19:56:48.016783 565 log.go:172] (0xc00003ab00) (0xc000151680) Stream added, broadcasting: 3\nI0506 19:56:48.018048 565 log.go:172] (0xc00003ab00) Reply frame received for 3\nI0506 19:56:48.018084 565 log.go:172] (0xc00003ab00) (0xc0000dcdc0) Create stream\nI0506 19:56:48.018095 565 log.go:172] (0xc00003ab00) (0xc0000dcdc0) Stream added, broadcasting: 5\nI0506 19:56:48.018993 565 log.go:172] (0xc00003ab00) Reply frame received for 5\nI0506 19:56:48.092522 565 log.go:172] (0xc00003ab00) Data frame received for 5\nI0506 19:56:48.092561 565 log.go:172] (0xc0000dcdc0) (5) Data frame handling\nI0506 19:56:48.092582 565 log.go:172] (0xc0000dcdc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 19:56:48.203298 565 log.go:172] (0xc00003ab00) Data frame received for 3\nI0506 19:56:48.203347 565 log.go:172] (0xc000151680) (3) Data frame handling\nI0506 19:56:48.203381 565 log.go:172] (0xc000151680) (3) Data frame sent\nI0506 19:56:48.203452 565 log.go:172] (0xc00003ab00) Data frame received for 3\nI0506 19:56:48.203479 565 log.go:172] (0xc000151680) (3) Data frame handling\nI0506 19:56:48.203667 565 log.go:172] (0xc00003ab00) Data frame received for 5\nI0506 19:56:48.203705 565 log.go:172] (0xc0000dcdc0) (5) Data frame handling\nI0506 
19:56:48.205842 565 log.go:172] (0xc00003ab00) Data frame received for 1\nI0506 19:56:48.205878 565 log.go:172] (0xc0000dc780) (1) Data frame handling\nI0506 19:56:48.205912 565 log.go:172] (0xc0000dc780) (1) Data frame sent\nI0506 19:56:48.205945 565 log.go:172] (0xc00003ab00) (0xc0000dc780) Stream removed, broadcasting: 1\nI0506 19:56:48.206040 565 log.go:172] (0xc00003ab00) Go away received\nI0506 19:56:48.206483 565 log.go:172] (0xc00003ab00) (0xc0000dc780) Stream removed, broadcasting: 1\nI0506 19:56:48.206511 565 log.go:172] (0xc00003ab00) (0xc000151680) Stream removed, broadcasting: 3\nI0506 19:56:48.206524 565 log.go:172] (0xc00003ab00) (0xc0000dcdc0) Stream removed, broadcasting: 5\n" May 6 19:56:48.212: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 19:56:48.212: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 19:56:48.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3464 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 19:56:48.559: INFO: stderr: "I0506 19:56:48.392047 585 log.go:172] (0xc000a95600) (0xc0006cdb80) Create stream\nI0506 19:56:48.392111 585 log.go:172] (0xc000a95600) (0xc0006cdb80) Stream added, broadcasting: 1\nI0506 19:56:48.398295 585 log.go:172] (0xc000a95600) Reply frame received for 1\nI0506 19:56:48.398361 585 log.go:172] (0xc000a95600) (0xc0005aea00) Create stream\nI0506 19:56:48.398381 585 log.go:172] (0xc000a95600) (0xc0005aea00) Stream added, broadcasting: 3\nI0506 19:56:48.400911 585 log.go:172] (0xc000a95600) Reply frame received for 3\nI0506 19:56:48.400943 585 log.go:172] (0xc000a95600) (0xc0005aef00) Create stream\nI0506 19:56:48.400953 585 log.go:172] (0xc000a95600) (0xc0005aef00) Stream added, broadcasting: 5\nI0506 19:56:48.402010 585 log.go:172] (0xc000a95600) Reply frame received for 5\nI0506 19:56:48.470251 585 log.go:172] (0xc000a95600) Data frame received for 5\nI0506 19:56:48.470285 585 log.go:172] (0xc0005aef00) (5) Data frame handling\nI0506 19:56:48.470309 585 log.go:172] (0xc0005aef00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 19:56:48.552230 585 log.go:172] (0xc000a95600) Data frame received for 3\nI0506 19:56:48.552261 585 log.go:172] (0xc0005aea00) (3) Data frame handling\nI0506 19:56:48.552283 585 log.go:172] (0xc0005aea00) (3) Data frame sent\nI0506 19:56:48.552296 585 log.go:172] (0xc000a95600) Data frame received for 3\nI0506 19:56:48.552309 585 log.go:172] (0xc0005aea00) (3) Data frame handling\nI0506 19:56:48.552433 585 log.go:172] (0xc000a95600) Data frame received for 5\nI0506 19:56:48.552449 585 log.go:172] (0xc0005aef00) (5) Data frame handling\nI0506 19:56:48.554627 585 log.go:172] (0xc000a95600) Data frame received for 1\nI0506 19:56:48.554650 585 log.go:172] (0xc0006cdb80) (1) Data frame handling\nI0506 19:56:48.554662 585 log.go:172] (0xc0006cdb80) (1) Data frame sent\nI0506 19:56:48.554678 585 log.go:172] (0xc000a95600) (0xc0006cdb80) Stream removed, broadcasting: 1\nI0506 19:56:48.554959 585 log.go:172] (0xc000a95600) (0xc0006cdb80) Stream removed, broadcasting: 1\nI0506 19:56:48.554975 585 log.go:172] (0xc000a95600) (0xc0005aea00) Stream removed, broadcasting: 3\nI0506 19:56:48.554983 585 log.go:172] (0xc000a95600) (0xc0005aef00) Stream removed, broadcasting: 5\n" May 6 19:56:48.559: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 19:56:48.559: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 19:56:48.559: INFO: Waiting for statefulset status.replicas updated to 0 May 6 19:56:48.578: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 6 19:56:58.597: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 19:56:58.597: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 6 19:56:58.597: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 6 19:56:58.623: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999442s May 6 19:56:59.628: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980127993s May 6 19:57:00.633: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975487507s May 6 19:57:01.743: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.97007843s May 6 19:57:02.748: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.860074459s May 6 19:57:03.754: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.854986173s May 6 19:57:04.759: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.849347256s May 6 19:57:05.765: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.843905381s May 6 19:57:06.771: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.83802686s May 6 19:57:07.776: INFO: Verifying statefulset ss doesn't scale past 3 for another 832.423152ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3464 May 6 19:57:08.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3464 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 19:57:09.030: INFO: stderr: "I0506 19:57:08.939929 604 log.go:172] (0xc000a61970) (0xc00037a320) Create stream\nI0506 19:57:08.940005 604 log.go:172] (0xc000a61970) (0xc00037a320) Stream added, broadcasting: 1\nI0506 19:57:08.942813 604 log.go:172] (0xc000a61970) Reply frame received for 1\nI0506 19:57:08.942864 604 log.go:172] (0xc000a61970) (0xc0004c8140) Create stream\nI0506 19:57:08.942876 604 log.go:172] (0xc000a61970) (0xc0004c8140) Stream added, broadcasting: 3\nI0506 19:57:08.943956 604 log.go:172] (0xc000a61970) Reply frame received for 3\nI0506 19:57:08.944012 604 log.go:172] (0xc000a61970) (0xc00037a960) Create stream\nI0506 19:57:08.944031 604 log.go:172] (0xc000a61970) (0xc00037a960) Stream added, broadcasting: 5\nI0506 19:57:08.944840 604 log.go:172] (0xc000a61970) Reply frame received for 5\nI0506 19:57:09.022629 604 log.go:172] (0xc000a61970) Data frame received for 3\nI0506 19:57:09.022669 604 log.go:172] (0xc000a61970) Data frame received for 5\nI0506 19:57:09.022695 604 log.go:172] (0xc00037a960) (5) Data frame handling\nI0506 19:57:09.022708 604 log.go:172] (0xc00037a960) (5) Data frame sent\nI0506 19:57:09.022717 604 log.go:172] (0xc000a61970) Data frame received for 5\nI0506 19:57:09.022725 604 log.go:172] (0xc00037a960) (5) Data frame handling\nI0506 19:57:09.022748 604 log.go:172] (0xc0004c8140) (3) Data frame handling\nI0506 19:57:09.022762 604 log.go:172] (0xc0004c8140) (3) Data frame sent\nI0506 19:57:09.022773 604 log.go:172] 
(0xc000a61970) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 19:57:09.022783 604 log.go:172] (0xc0004c8140) (3) Data frame handling\nI0506 19:57:09.024053 604 log.go:172] (0xc000a61970) Data frame received for 1\nI0506 19:57:09.024083 604 log.go:172] (0xc00037a320) (1) Data frame handling\nI0506 19:57:09.024096 604 log.go:172] (0xc00037a320) (1) Data frame sent\nI0506 19:57:09.024110 604 log.go:172] (0xc000a61970) (0xc00037a320) Stream removed, broadcasting: 1\nI0506 19:57:09.024138 604 log.go:172] (0xc000a61970) Go away received\nI0506 19:57:09.024547 604 log.go:172] (0xc000a61970) (0xc00037a320) Stream removed, broadcasting: 1\nI0506 19:57:09.024572 604 log.go:172] (0xc000a61970) (0xc0004c8140) Stream removed, broadcasting: 3\nI0506 19:57:09.024583 604 log.go:172] (0xc000a61970) (0xc00037a960) Stream removed, broadcasting: 5\n" May 6 19:57:09.030: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 19:57:09.030: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 19:57:09.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3464 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 19:57:09.248: INFO: stderr: "I0506 19:57:09.171924 624 log.go:172] (0xc000bd31e0) (0xc00099c6e0) Create stream\nI0506 19:57:09.171988 624 log.go:172] (0xc000bd31e0) (0xc00099c6e0) Stream added, broadcasting: 1\nI0506 19:57:09.177593 624 log.go:172] (0xc000bd31e0) Reply frame received for 1\nI0506 19:57:09.177637 624 log.go:172] (0xc000bd31e0) (0xc000448e60) Create stream\nI0506 19:57:09.177647 624 log.go:172] (0xc000bd31e0) (0xc000448e60) Stream added, broadcasting: 3\nI0506 19:57:09.178418 624 log.go:172] (0xc000bd31e0) Reply frame received for 3\nI0506 19:57:09.178444 624 log.go:172] (0xc000bd31e0) (0xc0006368c0) Create stream\nI0506 19:57:09.178451 624 log.go:172] (0xc000bd31e0) (0xc0006368c0) Stream added, broadcasting: 5\nI0506 19:57:09.179407 624 log.go:172] (0xc000bd31e0) Reply frame received for 5\nI0506 19:57:09.243234 624 log.go:172] (0xc000bd31e0) Data frame received for 5\nI0506 19:57:09.243264 624 log.go:172] (0xc0006368c0) (5) Data frame handling\nI0506 19:57:09.243273 624 log.go:172] (0xc0006368c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 19:57:09.243284 624 log.go:172] (0xc000bd31e0) Data frame received for 3\nI0506 19:57:09.243289 624 log.go:172] (0xc000448e60) (3) Data frame handling\nI0506 19:57:09.243294 624 log.go:172] (0xc000448e60) (3) Data frame sent\nI0506 19:57:09.243299 624 log.go:172] (0xc000bd31e0) Data frame received for 3\nI0506 19:57:09.243305 624 log.go:172] (0xc000448e60) (3) Data frame handling\nI0506 19:57:09.243633 624 log.go:172] (0xc000bd31e0) Data frame received for 5\nI0506 19:57:09.243668 624 log.go:172] (0xc0006368c0) (5) Data frame handling\nI0506 19:57:09.244750 624 log.go:172] (0xc000bd31e0) Data frame received for 1\nI0506 19:57:09.244787 624 log.go:172] (0xc00099c6e0) (1) Data frame handling\nI0506 19:57:09.244805 624 log.go:172] (0xc00099c6e0) (1) Data frame sent\nI0506 19:57:09.244816 624 log.go:172] (0xc000bd31e0) (0xc00099c6e0) Stream removed, broadcasting: 1\nI0506 19:57:09.244836 624 log.go:172] (0xc000bd31e0) Go away received\nI0506 19:57:09.245212 624 log.go:172] (0xc000bd31e0) (0xc00099c6e0) Stream removed, broadcasting: 1\nI0506 
19:57:09.245274 624 log.go:172] (0xc000bd31e0) (0xc000448e60) Stream removed, broadcasting: 3\nI0506 19:57:09.245307 624 log.go:172] (0xc000bd31e0) (0xc0006368c0) Stream removed, broadcasting: 5\n" May 6 19:57:09.248: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 19:57:09.248: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 19:57:09.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3464 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 19:57:09.462: INFO: stderr: "I0506 19:57:09.380755 644 log.go:172] (0xc00003a420) (0xc000564640) Create stream\nI0506 19:57:09.380836 644 log.go:172] (0xc00003a420) (0xc000564640) Stream added, broadcasting: 1\nI0506 19:57:09.383427 644 log.go:172] (0xc00003a420) Reply frame received for 1\nI0506 19:57:09.383473 644 log.go:172] (0xc00003a420) (0xc0003094a0) Create stream\nI0506 19:57:09.383491 644 log.go:172] (0xc00003a420) (0xc0003094a0) Stream added, broadcasting: 3\nI0506 19:57:09.384267 644 log.go:172] (0xc00003a420) Reply frame received for 3\nI0506 19:57:09.384296 644 log.go:172] (0xc00003a420) (0xc00013b680) Create stream\nI0506 19:57:09.384304 644 log.go:172] (0xc00003a420) (0xc00013b680) Stream added, broadcasting: 5\nI0506 19:57:09.385079 644 log.go:172] (0xc00003a420) Reply frame received for 5\nI0506 19:57:09.454041 644 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 19:57:09.454090 644 log.go:172] (0xc00013b680) (5) Data frame handling\nI0506 19:57:09.454105 644 log.go:172] (0xc00013b680) (5) Data frame sent\nI0506 19:57:09.454116 644 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 19:57:09.454128 644 log.go:172] (0xc00013b680) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 19:57:09.454154 644 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 19:57:09.454193 644 log.go:172] (0xc0003094a0) (3) Data frame handling\nI0506 19:57:09.454221 644 log.go:172] (0xc0003094a0) (3) Data frame sent\nI0506 19:57:09.454240 644 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 19:57:09.454259 644 log.go:172] (0xc0003094a0) (3) Data frame handling\nI0506 19:57:09.456104 644 log.go:172] (0xc00003a420) Data frame received for 1\nI0506 19:57:09.456121 644 log.go:172] (0xc000564640) (1) Data frame handling\nI0506 19:57:09.456130 644 log.go:172] (0xc000564640) (1) Data frame sent\nI0506 19:57:09.456140 644 log.go:172] (0xc00003a420) (0xc000564640) Stream removed, broadcasting: 1\nI0506 19:57:09.456149 644 log.go:172] (0xc00003a420) Go away received\nI0506 19:57:09.456661 644 log.go:172] (0xc00003a420) (0xc000564640) Stream removed, broadcasting: 1\nI0506 19:57:09.456697 644 log.go:172] (0xc00003a420) (0xc0003094a0) Stream removed, broadcasting: 3\nI0506 19:57:09.456710 644 log.go:172] (0xc00003a420) (0xc00013b680) Stream removed, broadcasting: 5\n" May 6 19:57:09.462: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 19:57:09.462: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 19:57:09.462: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 6 19:57:29.484: INFO: Deleting all statefulset in ns statefulset-3464 May 6 19:57:29.487: INFO: Scaling statefulset ss to 0 May 6 19:57:29.496: INFO: Waiting for statefulset status.replicas updated to 0 May 6 19:57:29.499: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:57:29.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3464" for this suite. • [SLOW TEST:83.610 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":33,"skipped":502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:57:29.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 6 19:57:29.628: INFO: Pod name pod-release: Found 0 pods out of 1 May 6 19:57:34.634: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:57:34.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1493" for this suite. 
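[Editor's note] What "released" means in this spec: once a pod's labels stop matching the controller's selector, the ReplicationController orphans the pod and creates a replacement to restore the desired replica count. A hand-run sketch; the selector key name=pod-release is inferred from the pod name in the log, and the concrete pod name is illustrative:

    $ kubectl get pods -l name=pod-release --namespace=replication-controller-1493
    $ kubectl label pod pod-release-abc12 name=released --overwrite \
        --namespace=replication-controller-1493
    # The relabeled pod is no longer owned by the RC; a new pod appears
    # alongside it to satisfy the replica count.
    $ kubectl get pods --namespace=replication-controller-1493 --show-labels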
• [SLOW TEST:5.255 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":34,"skipped":537,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:57:34.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 6 19:57:34.940: INFO: Waiting up to 5m0s for pod "downwardapi-volume-469a8599-8e7e-476b-bdfd-6fc9ee44288e" in namespace "projected-9066" to be "Succeeded or Failed"
May 6 19:57:35.012: INFO: Pod "downwardapi-volume-469a8599-8e7e-476b-bdfd-6fc9ee44288e": Phase="Pending", Reason="", readiness=false. Elapsed: 71.991376ms
May 6 19:57:37.016: INFO: Pod "downwardapi-volume-469a8599-8e7e-476b-bdfd-6fc9ee44288e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07604299s
May 6 19:57:39.021: INFO: Pod "downwardapi-volume-469a8599-8e7e-476b-bdfd-6fc9ee44288e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081354123s
STEP: Saw pod success
May 6 19:57:39.021: INFO: Pod "downwardapi-volume-469a8599-8e7e-476b-bdfd-6fc9ee44288e" satisfied condition "Succeeded or Failed"
May 6 19:57:39.024: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-469a8599-8e7e-476b-bdfd-6fc9ee44288e container client-container:
STEP: delete the pod
May 6 19:57:39.070: INFO: Waiting for pod downwardapi-volume-469a8599-8e7e-476b-bdfd-6fc9ee44288e to disappear
May 6 19:57:39.077: INFO: Pod downwardapi-volume-469a8599-8e7e-476b-bdfd-6fc9ee44288e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 19:57:39.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9066" for this suite.
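The downwardAPI test above checks a defaulting rule: a projected downwardAPI volume that requests limits.memory for a container with no memory limit set reports the node's allocatable memory instead of a pod-level value. A minimal sketch of such a pod, assuming a namespace you control (the busybox image and all names below are illustrative, not taken from this run):

    kubectl apply -n demo -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: "memory_limit"
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.memory
    EOF

Reading the output back (kubectl logs downwardapi-demo -n demo) prints the node's allocatable memory figure, since the container sets no limit of its own, which is exactly what the test asserts.
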
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":35,"skipped":590,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:57:39.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 19:57:39.959: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 19:57:42.012: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391859, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391859, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391860, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391859, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 19:57:45.111: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:57:46.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4625" for this suite. STEP: Destroying namespace "webhook-4625-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.065 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":36,"skipped":599,"failed":0}
SSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 19:57:48.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 6 19:57:49.232: INFO: Creating deployment "webserver-deployment"
May 6 19:57:49.295: INFO: Waiting for observed generation 1
May 6 19:57:51.669: INFO: Waiting for all required pods to come up
May 6 19:57:51.798: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 6 19:58:06.660: INFO: Waiting for deployment "webserver-deployment" to complete
May 6 19:58:06.667: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 6 19:58:06.673: INFO: Updating deployment webserver-deployment
May 6 19:58:06.673: INFO: Waiting for observed generation 2
May 6 19:58:08.966: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 6 19:58:09.169: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 6 19:58:09.261: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 6 19:58:10.044: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 6 19:58:10.044: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 6 19:58:10.047: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 6 19:58:10.053: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May 6 19:58:10.053: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May 6 19:58:10.059: INFO: Updating deployment webserver-deployment
May 6 19:58:10.059: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May 6 19:58:10.666: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 6 19:58:10.890: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
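A note on the ".spec.replicas = 20" and "= 13" checks just above, before the teardown dump that follows: proportional scaling splits a scale-up delta across the deployment's ReplicaSets in proportion to their current sizes, capped so the total never exceeds desired replicas plus maxSurge (the Deployment dump below shows a RollingUpdate strategy with MaxSurge:3, MaxUnavailable:2). One consistent reading of the numbers, with the exact assignment of the rounding leftover being a controller implementation detail:

    delta = 30 - 10 = 20 extra replicas to distribute across RSes sized 8 and 5
    old ReplicaSet share: 20 * 8/13 ≈ 12.3 -> 12, so 8 + 12 = 20
    new ReplicaSet share: 20 * 5/13 ≈ 7.7 -> 8 (absorbs the leftover), so 5 + 8 = 13
    total: 20 + 13 = 33 = 30 desired + maxSurge 3
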
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 6 19:58:11.800: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9548 /apis/apps/v1/namespaces/deployment-9548/deployments/webserver-deployment 63aee3e9-ad47-4b91-8f68-1939eb7ed4a0 2082273 3 2020-05-06 19:57:49 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-06 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-06 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0040b4c98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-06 19:58:09 +0000 UTC,LastTransitionTime:2020-05-06 19:57:49 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-06 19:58:10 +0000 UTC,LastTransitionTime:2020-05-06 19:58:10 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 6 19:58:12.085: INFO: New 
ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-9548 /apis/apps/v1/namespaces/deployment-9548/replicasets/webserver-deployment-6676bcd6d4 3865d854-58e9-4180-bfa8-c5d5bd6cf873 2082338 3 2020-05-06 19:58:06 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 63aee3e9-ad47-4b91-8f68-1939eb7ed4a0 0xc00442bd77 0xc00442bd78}] [] [{kube-controller-manager Update apps/v1 2020-05-06 19:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"63aee3e9-ad47-4b91-8f68-1939eb7ed4a0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00442bdf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 19:58:12.085: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 6 19:58:12.085: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-9548 /apis/apps/v1/namespaces/deployment-9548/replicasets/webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 2082327 3 2020-05-06 19:57:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 63aee3e9-ad47-4b91-8f68-1939eb7ed4a0 0xc00442be57 0xc00442be58}] [] [{kube-controller-manager Update apps/v1 2020-05-06 19:58:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"63aee3e9-ad47-4b91-8f68-1939eb7ed4a0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00442bed8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 6 19:58:12.267: INFO: Pod "webserver-deployment-6676bcd6d4-4f67l" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4f67l webserver-deployment-6676bcd6d4- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-6676bcd6d4-4f67l f2338d80-c9d8-4d86-bdf6-142ad64b8a53 2082249 0 2020-05-06 19:58:07 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 3865d854-58e9-4180-bfa8-c5d5bd6cf873 0xc0040b5107 0xc0040b5108}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3865d854-58e9-4180-bfa8-c5d5bd6cf873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:58:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:08 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-06 19:58:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.267: INFO: Pod "webserver-deployment-6676bcd6d4-67p69" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-67p69 webserver-deployment-6676bcd6d4- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-6676bcd6d4-67p69 dafe3e73-78f7-4ec3-8246-f18f8776e993 2082299 0 2020-05-06 19:58:10 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 3865d854-58e9-4180-bfa8-c5d5bd6cf873 0xc0040b52b7 0xc0040b52b8}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3865d854-58e9-4180-bfa8-c5d5bd6cf873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.268: INFO: Pod "webserver-deployment-6676bcd6d4-b7sxc" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-b7sxc webserver-deployment-6676bcd6d4- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-6676bcd6d4-b7sxc 5c807708-bfd0-4f6e-abdf-746550404e81 2082231 0 2020-05-06 19:58:06 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 3865d854-58e9-4180-bfa8-c5d5bd6cf873 0xc0040b53f7 0xc0040b53f8}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3865d854-58e9-4180-bfa8-c5d5bd6cf873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:58:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:07 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-06 19:58:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.268: INFO: Pod "webserver-deployment-6676bcd6d4-fhgb5" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-fhgb5 webserver-deployment-6676bcd6d4- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-6676bcd6d4-fhgb5 19354d3d-dc82-4dba-b0fa-e1674edf5b75 2082251 0 2020-05-06 19:58:07 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 3865d854-58e9-4180-bfa8-c5d5bd6cf873 0xc0040b55a7 0xc0040b55a8}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3865d854-58e9-4180-bfa8-c5d5bd6cf873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:58:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:08 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-06 19:58:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.268: INFO: Pod "webserver-deployment-6676bcd6d4-krnkk" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-krnkk webserver-deployment-6676bcd6d4- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-6676bcd6d4-krnkk ef5bad4a-f3ec-4831-899f-764bd736861c 2082322 0 2020-05-06 19:58:11 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 3865d854-58e9-4180-bfa8-c5d5bd6cf873 0xc0040b5767 0xc0040b5768}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3865d854-58e9-4180-bfa8-c5d5bd6cf873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.268: INFO: Pod "webserver-deployment-6676bcd6d4-kxvk4" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kxvk4 webserver-deployment-6676bcd6d4- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-6676bcd6d4-kxvk4 04d156ca-d490-43ca-9af0-7bd6c8027964 2082316 0 2020-05-06 19:58:11 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 3865d854-58e9-4180-bfa8-c5d5bd6cf873 0xc0040b58a7 0xc0040b58a8}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3865d854-58e9-4180-bfa8-c5d5bd6cf873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.269: INFO: Pod "webserver-deployment-6676bcd6d4-l22sk" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-l22sk webserver-deployment-6676bcd6d4- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-6676bcd6d4-l22sk 075dd825-51ea-4b95-9fcc-76f9860c5d0d 2082270 0 2020-05-06 19:58:10 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 3865d854-58e9-4180-bfa8-c5d5bd6cf873 0xc0040b59e7 0xc0040b59e8}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3865d854-58e9-4180-bfa8-c5d5bd6cf873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers
:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.269: INFO: Pod "webserver-deployment-6676bcd6d4-l9gvw" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-l9gvw webserver-deployment-6676bcd6d4- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-6676bcd6d4-l9gvw 41ab37d0-f039-4865-b62b-110d89c46c08 2082346 0 2020-05-06 19:58:06 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 3865d854-58e9-4180-bfa8-c5d5bd6cf873 0xc0040b5b27 0xc0040b5b28}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3865d854-58e9-4180-bfa8-c5d5bd6cf873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:58:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.37\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-06 19:58:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.37,StartTime:2020-05-06 19:58:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.269: INFO: Pod "webserver-deployment-6676bcd6d4-pjxtk" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pjxtk webserver-deployment-6676bcd6d4- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-6676bcd6d4-pjxtk a1377073-cb4c-4661-9ba5-3d9eb742ed1b 2082218 0 2020-05-06 19:58:06 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 3865d854-58e9-4180-bfa8-c5d5bd6cf873 0xc0040b5d07 0xc0040b5d08}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3865d854-58e9-4180-bfa8-c5d5bd6cf873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:58:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:06 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-06 19:58:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.269: INFO: Pod "webserver-deployment-6676bcd6d4-q28hl" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-q28hl webserver-deployment-6676bcd6d4- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-6676bcd6d4-q28hl 68f3bbb0-f5bb-4c08-a18a-dd645afeb640 2082286 0 2020-05-06 19:58:10 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 3865d854-58e9-4180-bfa8-c5d5bd6cf873 0xc0040b5eb7 0xc0040b5eb8}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3865d854-58e9-4180-bfa8-c5d5bd6cf873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.270: INFO: Pod "webserver-deployment-6676bcd6d4-swnsg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-swnsg webserver-deployment-6676bcd6d4- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-6676bcd6d4-swnsg 3c618989-6ab8-40b0-be27-c854d3de410b 2082312 0 2020-05-06 19:58:11 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 3865d854-58e9-4180-bfa8-c5d5bd6cf873 0xc0040b5ff7 0xc0040b5ff8}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3865d854-58e9-4180-bfa8-c5d5bd6cf873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.270: INFO: Pod "webserver-deployment-6676bcd6d4-tsvbt" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-tsvbt webserver-deployment-6676bcd6d4- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-6676bcd6d4-tsvbt c9e2682e-3221-4ebc-908e-3b57d6656112 2082313 0 2020-05-06 19:58:11 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 3865d854-58e9-4180-bfa8-c5d5bd6cf873 0xc0040d2137 0xc0040d2138}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3865d854-58e9-4180-bfa8-c5d5bd6cf873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.270: INFO: Pod "webserver-deployment-6676bcd6d4-vnw29" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-vnw29 webserver-deployment-6676bcd6d4- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-6676bcd6d4-vnw29 aca78e60-2e56-4c5c-8612-62ba2f63433a 2082315 0 2020-05-06 19:58:11 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 3865d854-58e9-4180-bfa8-c5d5bd6cf873 0xc0040d2277 0xc0040d2278}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3865d854-58e9-4180-bfa8-c5d5bd6cf873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.270: INFO: Pod "webserver-deployment-84855cf797-2xcxl" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2xcxl webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-2xcxl fddd9f66-35d1-4d9e-abec-bad772bcea21 2082157 0 2020-05-06 19:57:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d23b7 0xc0040d23b8}] [] [{kube-controller-manager Update v1 2020-05-06 19:57:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:58:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.35\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 
19:58:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.35,StartTime:2020-05-06 19:57:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 19:58:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://59d0fcfc647e6b7ecb7fb65a810e2fd628c48f6b0c271195985fb5bd5ef89608,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.270: INFO: Pod "webserver-deployment-84855cf797-492q9" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-492q9 webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-492q9 52db1562-bdc8-4aa9-ba06-ac787d9f12de 2082287 0 2020-05-06 19:58:10 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d2567 0xc0040d2568}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.270: INFO: Pod "webserver-deployment-84855cf797-4kz4b" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4kz4b webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-4kz4b 168d75d3-0342-4567-8f3f-c45911c3e6df 2082124 0 2020-05-06 19:57:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d2697 0xc0040d2698}] [] [{kube-controller-manager Update v1 2020-05-06 19:57:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:57:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.33\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 
19:57:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.33,StartTime:2020-05-06 19:57:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 19:57:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://291e8af7509f1fe30a164d9219dfc773408fb4e633de95db73211e758d1e571c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.271: INFO: Pod "webserver-deployment-84855cf797-4nmp2" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4nmp2 webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-4nmp2 7250742c-0ff5-45a3-9e59-fc45e1cb0510 2082141 0 2020-05-06 19:57:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d2857 0xc0040d2858}] [] [{kube-controller-manager Update v1 2020-05-06 19:57:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:58:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.34\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 
19:58:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.34,StartTime:2020-05-06 19:57:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 19:58:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e0071d270b80e114af110858694a1e2a9c7b1b60450c05887e3fc69e5d2c2515,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.34,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.271: INFO: Pod "webserver-deployment-84855cf797-7khcs" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7khcs webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-7khcs fab493de-2070-4db2-b635-dca94396f305 2082163 0 2020-05-06 19:57:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d2a07 0xc0040d2a08}] [] [{kube-controller-manager Update v1 2020-05-06 19:57:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:58:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.36\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 
19:58:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.36,StartTime:2020-05-06 19:57:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 19:58:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://55d9ea879bc0d24b29ec9413dc13d0772b7b880d141fa2ff89651b0b0b8934b9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.36,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.271: INFO: Pod "webserver-deployment-84855cf797-7lvzd" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7lvzd webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-7lvzd 3e05157c-6292-47f9-b19d-c212077dcc92 2082183 0 2020-05-06 19:57:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d2bb7 0xc0040d2bb8}] [] [{kube-controller-manager Update v1 2020-05-06 19:57:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:58:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.125\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 
19:58:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.125,StartTime:2020-05-06 19:57:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 19:58:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6d831a468e807f697862214f79628edead18dcf7d8bc1b51c80a958daa8f7355,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.125,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 6 19:58:12.271: INFO: Pod "webserver-deployment-84855cf797-7q9gx" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7q9gx webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-7q9gx 9571a1a6-2deb-4904-ba21-76093ac93c62 2082309 0 2020-05-06 19:58:11 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d2d67 0xc0040d2d68}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 6 19:58:12.271: INFO: Pod "webserver-deployment-84855cf797-f9q45" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-f9q45 webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-f9q45 73f6bab0-e443-4ce2-ae35-3397d0d4e869 2082318 0 2020-05-06 19:58:11 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d2e97 0xc0040d2e98}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 6 19:58:12.272: INFO: Pod "webserver-deployment-84855cf797-gd5n4" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-gd5n4 webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-gd5n4 00331e64-8bbb-498d-b91d-2b25328f8c5c 2082272 0 2020-05-06 19:58:10 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d2fc7 0xc0040d2fc8}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 6 19:58:12.272: INFO: Pod "webserver-deployment-84855cf797-hlt8n" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hlt8n webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-hlt8n 3242e884-fa36-4e01-80cd-dd4559f829f0 2082298 0 2020-05-06 19:58:10 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d30f7 0xc0040d30f8}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 6 19:58:12.272: INFO: Pod "webserver-deployment-84855cf797-jzj6n" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jzj6n webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-jzj6n b0fa4dc6-d097-459c-aa7c-57ea17e1d6dd 2082110 0 2020-05-06 19:57:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d3227 0xc0040d3228}] [] [{kube-controller-manager Update v1 2020-05-06 19:57:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:57:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.32\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 
19:57:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.32,StartTime:2020-05-06 19:57:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 19:57:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bf8219e5603bee847388bb6a81d07babf154da7fe2bc4ee53fd306068ff9f0d3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.32,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.272: INFO: Pod "webserver-deployment-84855cf797-kzfm6" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-kzfm6 webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-kzfm6 6116072d-2d1e-4c5f-b1e0-870d513d92b9 2082320 0 2020-05-06 19:58:10 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d33d7 0xc0040d33d8}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:58:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-06 19:58:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 6 19:58:12.273: INFO: Pod "webserver-deployment-84855cf797-lsmxm" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-lsmxm webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-lsmxm 5f65f40d-1578-418e-88d4-81c272640968 2082314 0 2020-05-06 19:58:11 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d3567 0xc0040d3568}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 6 19:58:12.273: INFO: Pod "webserver-deployment-84855cf797-nknmw" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-nknmw webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-nknmw 864b4a42-1947-4f7c-bb8c-7b9e7692e443 2082291 0 2020-05-06 19:58:10 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d3697 0xc0040d3698}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 6 19:58:12.273: INFO: Pod "webserver-deployment-84855cf797-ntgj2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ntgj2 webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-ntgj2 97afce11-c77c-4e3d-b295-35c872015fa0 2082319 0 2020-05-06 19:58:11 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d37c7 0xc0040d37c8}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 6 19:58:12.273: INFO: Pod "webserver-deployment-84855cf797-r6657" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-r6657 webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-r6657 06ae6b83-22e3-4e89-b70d-d68d229a35d5 2082337 0 2020-05-06 19:58:10 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d38f7 0xc0040d38f8}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:58:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-06 19:58:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 6 19:58:12.274: INFO: Pod "webserver-deployment-84855cf797-tgmr2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tgmr2 webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-tgmr2 c03f3ccc-b3d8-4dac-b363-5ccff047db17 2082284 0 2020-05-06 19:58:10 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d3a87 0xc0040d3a88}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 6 19:58:12.274: INFO: Pod "webserver-deployment-84855cf797-thtbb" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-thtbb webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-thtbb 194fd4de-032e-4cee-9b9f-c9450c7a10bc 2082136 0 2020-05-06 19:57:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d3bb7 0xc0040d3bb8}] [] [{kube-controller-manager Update v1 2020-05-06 19:57:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:58:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.124\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 
19:58:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.124,StartTime:2020-05-06 19:57:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 19:57:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3281413ae7d40ae98d857053551c35e3a2c9d2106106a9b853b9f1c9c545dbd5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.124,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.274: INFO: Pod "webserver-deployment-84855cf797-vjbv9" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-vjbv9 webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-vjbv9 c24ab521-86af-4548-ad0b-6907a75cb037 2082166 0 2020-05-06 19:57:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d3d67 0xc0040d3d68}] [] [{kube-controller-manager Update v1 2020-05-06 19:57:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 19:58:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.126\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 
19:58:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:57:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.126,StartTime:2020-05-06 19:57:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 19:58:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2dd04aaf6a27855f82e344a2e0f6511008b1c4d0570b3a198f8e40e7099a3cf4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.126,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 19:58:12.274: INFO: Pod "webserver-deployment-84855cf797-wlpvl" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wlpvl webserver-deployment-84855cf797- deployment-9548 /api/v1/namespaces/deployment-9548/pods/webserver-deployment-84855cf797-wlpvl 72678bd0-63ff-4e69-a7cc-3cf5fd1a3e48 2082317 0 2020-05-06 19:58:11 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 06950e98-c412-476c-bbcc-81d30b775977 0xc0040d3f17 0xc0040d3f18}] [] [{kube-controller-manager Update v1 2020-05-06 19:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06950e98-c412-476c-bbcc-81d30b775977\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5vhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5vhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&S
ecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 19:58:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:58:12.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9548" for this suite. 
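The pod dumps above close out the proportional-scaling spec: the Deployment is scaled while a RollingUpdate is still in flight, and the controller divides the added replicas between the old and new ReplicaSets in proportion to their current sizes, within the maxSurge/maxUnavailable budget. A minimal client-go sketch of the starting state, assuming a reachable cluster; the namespace, replica counts, and surge/unavailable values are illustrative:

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	replicas := int32(10)              // illustrative size
	maxSurge := intstr.FromInt(3)      // illustrative surge/unavailable budget
	maxUnavailable := intstr.FromInt(2)
	labels := map[string]string{"name": "httpd"}

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "httpd",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	// "default" is an illustrative namespace; the e2e run uses its own ephemeral one.
	if _, err := cs.AppsV1().Deployments("default").Create(context.TODO(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Updating the image and then bumping .spec.replicas while the rollout is
	// unfinished is what triggers the proportional split verified above.
	fmt.Println("created deployment webserver-deployment")
}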
• [SLOW TEST:24.400 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":37,"skipped":602,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:58:12.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:58:16.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4678" for this suite. • [SLOW TEST:5.514 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":38,"skipped":648,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:58:18.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:58:38.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8920" for this suite. 
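The wrapper-volume spec above mounts a Secret and a ConfigMap side by side; the kubelet materializes both through emptyDir-backed wrapper volumes, and the assertion is that the two mounts do not conflict. A sketch of an equivalent pod, assuming the secret and configmap already exist (all names are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-volumes-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "demo",
				Image:   "docker.io/library/httpd:2.4.38-alpine",
				Command: []string{"sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true},
				},
			}},
			// Assumes these two objects exist; names are illustrative.
			Volumes: []corev1.Volume{
				{Name: "secret-volume", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"}}},
				{Name: "configmap-volume", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"}}}},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}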
• [SLOW TEST:20.465 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":39,"skipped":687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:58:38.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 6 19:58:47.807: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:58:47.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8745" for this suite. 
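In the spec above the container writes "OK" to its termination-message file and exits 0; because the file is non-empty, the kubelet surfaces it as the termination message even though the FallbackToLogsOnError policy would only consult logs on a failed, message-less exit. A hedged sketch of the relevant container fields (image and path are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "msg",
				Image: "docker.io/library/busybox:1.29", // illustrative image
				// Write the message file and exit 0; the kubelet copies the file
				// into status.containerStatuses[0].state.terminated.message.
				Command:                  []string{"sh", "-c", "printf OK > /dev/termination-custom-log"},
				TerminationMessagePath:   "/dev/termination-custom-log", // illustrative path
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}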
• [SLOW TEST:9.406 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":40,"skipped":725,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:58:47.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 6 19:58:48.079: INFO: Waiting up to 5m0s for pod "downward-api-89f26c72-ce4d-4f29-b042-691396700e60" in namespace "downward-api-3331" to be "Succeeded or Failed" May 6 19:58:48.376: INFO: Pod "downward-api-89f26c72-ce4d-4f29-b042-691396700e60": Phase="Pending", Reason="", readiness=false. Elapsed: 297.008452ms May 6 19:58:50.381: INFO: Pod "downward-api-89f26c72-ce4d-4f29-b042-691396700e60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301548826s May 6 19:58:52.474: INFO: Pod "downward-api-89f26c72-ce4d-4f29-b042-691396700e60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.394210372s May 6 19:58:54.486: INFO: Pod "downward-api-89f26c72-ce4d-4f29-b042-691396700e60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.406405226s STEP: Saw pod success May 6 19:58:54.486: INFO: Pod "downward-api-89f26c72-ce4d-4f29-b042-691396700e60" satisfied condition "Succeeded or Failed" May 6 19:58:54.488: INFO: Trying to get logs from node latest-worker pod downward-api-89f26c72-ce4d-4f29-b042-691396700e60 container dapi-container: STEP: delete the pod May 6 19:58:54.543: INFO: Waiting for pod downward-api-89f26c72-ce4d-4f29-b042-691396700e60 to disappear May 6 19:58:54.568: INFO: Pod downward-api-89f26c72-ce4d-4f29-b042-691396700e60 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:58:54.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3331" for this suite. 
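The downward-API spec above injects the node address through an env var sourced from status.hostIP, which the kubelet resolves when the pod starts. A minimal sketch, with an illustrative image:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-hostip-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",                 // container name as in the run above
				Image:   "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						// Resolved by the kubelet at pod start.
						FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}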
• [SLOW TEST:6.635 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":41,"skipped":757,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:58:54.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 19:58:55.838: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 19:58:57.924: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391935, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391935, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391936, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391935, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 19:59:01.035: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:59:01.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2421" for this suite. 
STEP: Destroying namespace "webhook-2421-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.407 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":42,"skipped":759,"failed":0} [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:59:01.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 6 19:59:02.486: INFO: Waiting up to 5m0s for pod "pod-852a9904-0407-467d-b9bc-8867dfbbaded" in namespace "emptydir-7413" to be "Succeeded or Failed" May 6 19:59:02.490: INFO: Pod "pod-852a9904-0407-467d-b9bc-8867dfbbaded": Phase="Pending", Reason="", readiness=false. Elapsed: 4.892531ms May 6 19:59:04.780: INFO: Pod "pod-852a9904-0407-467d-b9bc-8867dfbbaded": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294023835s May 6 19:59:06.791: INFO: Pod "pod-852a9904-0407-467d-b9bc-8867dfbbaded": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305168686s May 6 19:59:08.821: INFO: Pod "pod-852a9904-0407-467d-b9bc-8867dfbbaded": Phase="Running", Reason="", readiness=true. Elapsed: 6.335191451s May 6 19:59:10.824: INFO: Pod "pod-852a9904-0407-467d-b9bc-8867dfbbaded": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.338181278s STEP: Saw pod success May 6 19:59:10.824: INFO: Pod "pod-852a9904-0407-467d-b9bc-8867dfbbaded" satisfied condition "Succeeded or Failed" May 6 19:59:10.826: INFO: Trying to get logs from node latest-worker pod pod-852a9904-0407-467d-b9bc-8867dfbbaded container test-container: STEP: delete the pod May 6 19:59:10.886: INFO: Waiting for pod pod-852a9904-0407-467d-b9bc-8867dfbbaded to disappear May 6 19:59:10.982: INFO: Pod pod-852a9904-0407-467d-b9bc-8867dfbbaded no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:59:10.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7413" for this suite. 
• [SLOW TEST:9.017 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":43,"skipped":759,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:59:11.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 6 19:59:16.254: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:59:16.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2695" for this suite. 
• [SLOW TEST:5.406 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":44,"skipped":777,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:59:16.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 19:59:17.152: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 19:59:19.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391957, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391957, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391957, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391957, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 19:59:21.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391957, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391957, 
loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391957, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724391957, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 19:59:24.618: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 19:59:24.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:59:25.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-906" for this suite. STEP: Destroying namespace "webhook-906-markers" for this suite. 
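Both webhook specs in this stretch of the run hinge on a ValidatingWebhookConfiguration: the fail-closed case registers one whose backend is unreachable and relies on failurePolicy Fail, while the custom-resource case registers rules that deny create, update, and delete on the CRD. A hedged sketch of such a registration; the service reference, path, CA bundle, and CRD group/resource are all illustrative:

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	failClosed := admissionregistrationv1.Fail // reject requests if the webhook is unreachable
	noSideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/custom-resource" // illustrative endpoint on the webhook server

	whc := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-custom-resource-demo"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name:                    "deny-custom-resource.example.com", // illustrative
			FailurePolicy:           &failClosed,
			SideEffects:             &noSideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default",          // illustrative
					Name:      "e2e-test-webhook", // service name as in the run above
					Path:      &path,
				},
				CABundle: []byte("-----BEGIN CERTIFICATE-----..."), // placeholder CA for the server cert
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{
					admissionregistrationv1.Create,
					admissionregistrationv1.Update,
					admissionregistrationv1.Delete,
				},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"stable.example.com"}, // illustrative CRD group
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-crds"}, // illustrative CRD plural
				},
			}},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(context.TODO(), whc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}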
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.883 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":45,"skipped":786,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:59:26.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 19:59:27.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' May 6 19:59:27.975: INFO: stderr: "" May 6 19:59:27.975: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.2.298+0bcbe384d866b9\", GitCommit:\"0bcbe384d866b9cf4b51d0a2905befc538e99db7\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T18:23:02Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:59:27.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7312" for this suite. 
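kubectl version, as run above, prints the client's build info plus the server's /version payload; the same server fields are available through the discovery client. A minimal sketch:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /version on the apiserver; returns the same fields kubectl prints
	// (GitVersion, GitCommit, BuildDate, GoVersion, Platform, ...).
	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("Server Version: %s (commit %s, built %s)\n", info.GitVersion, info.GitCommit, info.BuildDate)
}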
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":46,"skipped":790,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:59:27.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 19:59:29.367: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05282aec-7280-4e42-9115-e6b071f8be34" in namespace "projected-6024" to be "Succeeded or Failed" May 6 19:59:29.414: INFO: Pod "downwardapi-volume-05282aec-7280-4e42-9115-e6b071f8be34": Phase="Pending", Reason="", readiness=false. Elapsed: 47.721043ms May 6 19:59:31.589: INFO: Pod "downwardapi-volume-05282aec-7280-4e42-9115-e6b071f8be34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221915775s May 6 19:59:33.702: INFO: Pod "downwardapi-volume-05282aec-7280-4e42-9115-e6b071f8be34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335812794s May 6 19:59:35.706: INFO: Pod "downwardapi-volume-05282aec-7280-4e42-9115-e6b071f8be34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.339724385s STEP: Saw pod success May 6 19:59:35.706: INFO: Pod "downwardapi-volume-05282aec-7280-4e42-9115-e6b071f8be34" satisfied condition "Succeeded or Failed" May 6 19:59:35.709: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-05282aec-7280-4e42-9115-e6b071f8be34 container client-container: STEP: delete the pod May 6 19:59:35.986: INFO: Waiting for pod downwardapi-volume-05282aec-7280-4e42-9115-e6b071f8be34 to disappear May 6 19:59:36.006: INFO: Pod downwardapi-volume-05282aec-7280-4e42-9115-e6b071f8be34 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:59:36.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6024" for this suite. 
• [SLOW TEST:8.256 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":47,"skipped":796,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:59:36.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 6 19:59:37.015: INFO: Waiting up to 5m0s for pod "pod-677c7ee7-2c72-4aea-b38c-c23cff89c2be" in namespace "emptydir-6380" to be "Succeeded or Failed" May 6 19:59:37.217: INFO: Pod "pod-677c7ee7-2c72-4aea-b38c-c23cff89c2be": Phase="Pending", Reason="", readiness=false. Elapsed: 201.459389ms May 6 19:59:39.220: INFO: Pod "pod-677c7ee7-2c72-4aea-b38c-c23cff89c2be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204277705s May 6 19:59:41.300: INFO: Pod "pod-677c7ee7-2c72-4aea-b38c-c23cff89c2be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285077186s May 6 19:59:43.588: INFO: Pod "pod-677c7ee7-2c72-4aea-b38c-c23cff89c2be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.572757063s May 6 19:59:45.592: INFO: Pod "pod-677c7ee7-2c72-4aea-b38c-c23cff89c2be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.576834953s STEP: Saw pod success May 6 19:59:45.592: INFO: Pod "pod-677c7ee7-2c72-4aea-b38c-c23cff89c2be" satisfied condition "Succeeded or Failed" May 6 19:59:45.595: INFO: Trying to get logs from node latest-worker pod pod-677c7ee7-2c72-4aea-b38c-c23cff89c2be container test-container: STEP: delete the pod May 6 19:59:46.218: INFO: Waiting for pod pod-677c7ee7-2c72-4aea-b38c-c23cff89c2be to disappear May 6 19:59:46.278: INFO: Pod pod-677c7ee7-2c72-4aea-b38c-c23cff89c2be no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 19:59:46.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6380" for this suite. 
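The non-root variant above differs from the (root,0666,tmpfs) case sketched earlier only in the pod-level security context, so the same file checks run as an unprivileged UID. A short sketch of that delta (UID and image are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nonRootUID := int64(1001) // illustrative non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Running the whole pod as a non-root UID is the only change from
			// the root tmpfs case sketched earlier.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "docker.io/library/busybox:1.29", // illustrative image
				Command:      []string{"sh", "-c", "id -u && touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}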
• [SLOW TEST:10.240 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":48,"skipped":800,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 19:59:46.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-bd3c0d3c-d0a5-460f-b5e1-db61343af0a9 in namespace container-probe-957 May 6 19:59:53.131: INFO: Started pod test-webserver-bd3c0d3c-d0a5-460f-b5e1-db61343af0a9 in namespace container-probe-957 STEP: checking the pod's current state and verifying that restartCount is present May 6 19:59:53.134: INFO: Initial restart count of pod test-webserver-bd3c0d3c-d0a5-460f-b5e1-db61343af0a9 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:03:55.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-957" for this suite. 
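The probe spec above passes by observing that restartCount stays at 0 for roughly four minutes while an HTTP liveness probe keeps succeeding. A hedged sketch of the wiring, with an illustrative server image, port, and probe path; note that the Handler field was renamed ProbeHandler in later client-go releases:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "docker.io/library/httpd:2.4.38-alpine", // illustrative web server
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
				LivenessProbe: &corev1.Probe{
					// Field is named ProbeHandler in later client-go releases.
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 15,
					TimeoutSeconds:      1,
					FailureThreshold:    3,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// As long as GET / keeps returning a success code, the kubelet never restarts
	// the container and status.containerStatuses[0].restartCount stays at 0.
}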
• [SLOW TEST:248.855 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":49,"skipped":821,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:03:55.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:03:55.950: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 6 20:03:55.976: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:03:55.997: INFO: Number of nodes with available pods: 0 May 6 20:03:55.997: INFO: Node latest-worker is running more than one daemon pod May 6 20:03:57.184: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:03:57.187: INFO: Number of nodes with available pods: 0 May 6 20:03:57.187: INFO: Node latest-worker is running more than one daemon pod May 6 20:03:58.055: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:03:58.059: INFO: Number of nodes with available pods: 0 May 6 20:03:58.059: INFO: Node latest-worker is running more than one daemon pod May 6 20:03:59.002: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:03:59.746: INFO: Number of nodes with available pods: 0 May 6 20:03:59.746: INFO: Node latest-worker is running more than one daemon pod May 6 20:04:00.083: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:00.087: INFO: Number of nodes with available pods: 0 May 6 20:04:00.087: INFO: Node latest-worker is running more than one daemon pod May 6 20:04:01.132: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this 
node May 6 20:04:01.403: INFO: Number of nodes with available pods: 0 May 6 20:04:01.403: INFO: Node latest-worker is running more than one daemon pod May 6 20:04:02.215: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:02.219: INFO: Number of nodes with available pods: 1 May 6 20:04:02.219: INFO: Node latest-worker is running more than one daemon pod May 6 20:04:03.328: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:03.331: INFO: Number of nodes with available pods: 1 May 6 20:04:03.331: INFO: Node latest-worker is running more than one daemon pod May 6 20:04:04.003: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:04.007: INFO: Number of nodes with available pods: 2 May 6 20:04:04.007: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 6 20:04:04.158: INFO: Wrong image for pod: daemon-set-j7m6s. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:04.158: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:04.214: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:05.218: INFO: Wrong image for pod: daemon-set-j7m6s. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:05.218: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:05.221: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:06.268: INFO: Wrong image for pod: daemon-set-j7m6s. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:06.268: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:06.274: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:07.351: INFO: Wrong image for pod: daemon-set-j7m6s. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:07.351: INFO: Pod daemon-set-j7m6s is not available May 6 20:04:07.351: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 6 20:04:07.938: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:08.363: INFO: Wrong image for pod: daemon-set-j7m6s. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:08.364: INFO: Pod daemon-set-j7m6s is not available May 6 20:04:08.364: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:08.367: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:09.351: INFO: Wrong image for pod: daemon-set-j7m6s. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:09.351: INFO: Pod daemon-set-j7m6s is not available May 6 20:04:09.351: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:09.355: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:10.616: INFO: Wrong image for pod: daemon-set-j7m6s. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:10.616: INFO: Pod daemon-set-j7m6s is not available May 6 20:04:10.616: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:10.621: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:11.324: INFO: Wrong image for pod: daemon-set-j7m6s. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:11.324: INFO: Pod daemon-set-j7m6s is not available May 6 20:04:11.324: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:11.327: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:12.352: INFO: Wrong image for pod: daemon-set-j7m6s. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:12.352: INFO: Pod daemon-set-j7m6s is not available May 6 20:04:12.352: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:12.362: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:13.826: INFO: Wrong image for pod: daemon-set-j7m6s. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 6 20:04:13.826: INFO: Pod daemon-set-j7m6s is not available May 6 20:04:13.826: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:13.829: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:14.382: INFO: Wrong image for pod: daemon-set-j7m6s. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:14.382: INFO: Pod daemon-set-j7m6s is not available May 6 20:04:14.382: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:14.574: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:15.940: INFO: Pod daemon-set-6kqpl is not available May 6 20:04:15.940: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:17.024: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:17.525: INFO: Pod daemon-set-6kqpl is not available May 6 20:04:17.525: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:17.527: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:18.442: INFO: Pod daemon-set-6kqpl is not available May 6 20:04:18.442: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:19.365: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:20.532: INFO: Pod daemon-set-6kqpl is not available May 6 20:04:20.532: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:20.536: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:21.219: INFO: Pod daemon-set-6kqpl is not available May 6 20:04:21.219: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:21.223: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:22.219: INFO: Pod daemon-set-6kqpl is not available May 6 20:04:22.219: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 6 20:04:22.223: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:23.615: INFO: Pod daemon-set-6kqpl is not available May 6 20:04:23.615: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:23.619: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:24.409: INFO: Pod daemon-set-6kqpl is not available May 6 20:04:24.409: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:24.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:25.603: INFO: Pod daemon-set-6kqpl is not available May 6 20:04:25.603: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:25.678: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:26.218: INFO: Pod daemon-set-6kqpl is not available May 6 20:04:26.218: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:26.223: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:27.218: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:27.222: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:28.220: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 6 20:04:28.224: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:29.285: INFO: Wrong image for pod: daemon-set-z6xw2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 6 20:04:29.285: INFO: Pod daemon-set-z6xw2 is not available May 6 20:04:29.289: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:30.987: INFO: Pod daemon-set-hftbj is not available May 6 20:04:31.075: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:31.304: INFO: Pod daemon-set-hftbj is not available May 6 20:04:31.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 6 20:04:31.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:31.555: INFO: Number of nodes with available pods: 1 May 6 20:04:31.555: INFO: Node latest-worker2 is running more than one daemon pod May 6 20:04:32.560: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:32.565: INFO: Number of nodes with available pods: 1 May 6 20:04:32.565: INFO: Node latest-worker2 is running more than one daemon pod May 6 20:04:33.640: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:33.643: INFO: Number of nodes with available pods: 1 May 6 20:04:33.643: INFO: Node latest-worker2 is running more than one daemon pod May 6 20:04:35.344: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:35.812: INFO: Number of nodes with available pods: 1 May 6 20:04:35.812: INFO: Node latest-worker2 is running more than one daemon pod May 6 20:04:37.303: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:37.543: INFO: Number of nodes with available pods: 1 May 6 20:04:37.543: INFO: Node latest-worker2 is running more than one daemon pod May 6 20:04:37.627: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:37.783: INFO: Number of nodes with available pods: 1 May 6 20:04:37.783: INFO: Node latest-worker2 is running more than one daemon pod May 6 20:04:38.561: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:38.564: INFO: Number of nodes with available pods: 1 May 6 20:04:38.564: INFO: Node latest-worker2 is running more than one daemon pod May 6 20:04:39.570: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:04:39.572: INFO: Number of nodes with available pods: 2 May 6 20:04:39.572: INFO: Number of running nodes: 2, 
number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7470, will wait for the garbage collector to delete the pods May 6 20:04:41.411: INFO: Deleting DaemonSet.extensions daemon-set took: 6.183713ms May 6 20:04:42.511: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.100230346s May 6 20:04:56.764: INFO: Number of nodes with available pods: 0 May 6 20:04:56.764: INFO: Number of running nodes: 0, number of available pods: 0 May 6 20:04:56.767: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7470/daemonsets","resourceVersion":"2084068"},"items":null} May 6 20:04:56.771: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7470/pods","resourceVersion":"2084068"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:04:57.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7470" for this suite. • [SLOW TEST:61.712 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":50,"skipped":830,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:04:57.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:05:57.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1318" for this suite. 
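For context on the long polling sequence in the DaemonSet rolling-update test that passed above (daemon-set in daemonsets-7470): the test patches the DaemonSet's pod template image (here from docker.io/library/httpd:2.4.38-alpine to the agnhost image) and then polls until no pod reports the old image and every schedulable node runs an available replacement. A minimal sketch of the kind of spec being exercised; the field values are illustrative, not recovered from this run:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # the test's DaemonSet carries this name
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate       # old pods are deleted and replaced node by node
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # the "wrong image" reported above
EOF
# Changing the template image is what kicks off the replacement the log polls for:
kubectl set image daemonset/daemon-set app=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13

The control-plane node stays out of the node count because the pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, which is exactly what the repeated "skip checking this node" lines record.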
• [SLOW TEST:60.253 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":51,"skipped":896,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:05:57.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-8q5zn in namespace proxy-8632 I0506 20:05:58.742438 7 runners.go:190] Created replication controller with name: proxy-service-8q5zn, namespace: proxy-8632, replica count: 1 I0506 20:05:59.792842 7 runners.go:190] proxy-service-8q5zn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 20:06:00.793039 7 runners.go:190] proxy-service-8q5zn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 20:06:01.793357 7 runners.go:190] proxy-service-8q5zn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 20:06:02.793557 7 runners.go:190] proxy-service-8q5zn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 20:06:03.793763 7 runners.go:190] proxy-service-8q5zn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 20:06:04.793970 7 runners.go:190] proxy-service-8q5zn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 20:06:05.794191 7 runners.go:190] proxy-service-8q5zn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 20:06:06.794433 7 runners.go:190] proxy-service-8q5zn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 20:06:07.794674 7 runners.go:190] proxy-service-8q5zn Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 20:06:07.797: INFO: setup took 9.561962907s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 6 20:06:07.803: INFO: (0) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 6.391584ms) May 6 20:06:07.804: INFO: (0) 
/api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd/proxy/: test (200; 6.392384ms) May 6 20:06:07.805: INFO: (0) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 7.650695ms) May 6 20:06:07.805: INFO: (0) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 7.647772ms) May 6 20:06:07.805: INFO: (0) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 7.761458ms) May 6 20:06:07.805: INFO: (0) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 7.758025ms) May 6 20:06:07.807: INFO: (0) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 9.99166ms) May 6 20:06:07.807: INFO: (0) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname1/proxy/: foo (200; 10.167256ms) May 6 20:06:07.807: INFO: (0) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... (200; 10.208891ms) May 6 20:06:07.808: INFO: (0) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:1080/proxy/: ... (200; 11.098915ms) May 6 20:06:07.812: INFO: (0) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 14.864541ms) May 6 20:06:07.813: INFO: (0) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 15.619386ms) May 6 20:06:07.813: INFO: (0) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:460/proxy/: tls baz (200; 15.579871ms) May 6 20:06:07.813: INFO: (0) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname1/proxy/: tls baz (200; 15.551329ms) May 6 20:06:07.813: INFO: (0) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:462/proxy/: tls qux (200; 15.627016ms) May 6 20:06:07.813: INFO: (0) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: test<... (200; 7.998689ms) May 6 20:06:07.822: INFO: (1) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd/proxy/: test (200; 8.039074ms) May 6 20:06:07.822: INFO: (1) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 8.582084ms) May 6 20:06:07.823: INFO: (1) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 9.17194ms) May 6 20:06:07.823: INFO: (1) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 9.510898ms) May 6 20:06:07.823: INFO: (1) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname1/proxy/: foo (200; 9.443941ms) May 6 20:06:07.823: INFO: (1) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:1080/proxy/: ... 
(200; 9.526665ms) May 6 20:06:07.823: INFO: (1) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname1/proxy/: tls baz (200; 9.489447ms) May 6 20:06:07.823: INFO: (1) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 9.534502ms) May 6 20:06:07.823: INFO: (1) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 9.554145ms) May 6 20:06:07.823: INFO: (1) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:462/proxy/: tls qux (200; 9.54634ms) May 6 20:06:07.823: INFO: (1) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 9.564049ms) May 6 20:06:07.823: INFO: (1) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 9.629989ms) May 6 20:06:07.823: INFO: (1) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: test<... (200; 3.287036ms) May 6 20:06:07.827: INFO: (2) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 3.821541ms) May 6 20:06:07.827: INFO: (2) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 3.83646ms) May 6 20:06:07.827: INFO: (2) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:460/proxy/: tls baz (200; 3.910975ms) May 6 20:06:07.827: INFO: (2) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:462/proxy/: tls qux (200; 4.011523ms) May 6 20:06:07.828: INFO: (2) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 4.100974ms) May 6 20:06:07.828: INFO: (2) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 4.072714ms) May 6 20:06:07.828: INFO: (2) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd/proxy/: test (200; 4.074155ms) May 6 20:06:07.828: INFO: (2) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: ... (200; 4.643673ms) May 6 20:06:07.829: INFO: (2) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 5.985035ms) May 6 20:06:07.830: INFO: (2) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 6.150543ms) May 6 20:06:07.830: INFO: (2) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname1/proxy/: tls baz (200; 6.269368ms) May 6 20:06:07.830: INFO: (2) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname1/proxy/: foo (200; 6.274108ms) May 6 20:06:07.830: INFO: (2) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 6.321772ms) May 6 20:06:07.830: INFO: (2) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 6.341211ms) May 6 20:06:07.833: INFO: (3) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 2.929522ms) May 6 20:06:07.833: INFO: (3) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... (200; 3.427872ms) May 6 20:06:07.834: INFO: (3) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd/proxy/: test (200; 4.262295ms) May 6 20:06:07.834: INFO: (3) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:1080/proxy/: ... 
(200; 4.359961ms) May 6 20:06:07.834: INFO: (3) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:460/proxy/: tls baz (200; 4.392088ms) May 6 20:06:07.834: INFO: (3) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 4.430882ms) May 6 20:06:07.834: INFO: (3) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: test (200; 6.291609ms) May 6 20:06:07.841: INFO: (4) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 6.248132ms) May 6 20:06:07.841: INFO: (4) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 6.368378ms) May 6 20:06:07.841: INFO: (4) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 6.450512ms) May 6 20:06:07.841: INFO: (4) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:1080/proxy/: ... (200; 6.575246ms) May 6 20:06:07.841: INFO: (4) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 6.60117ms) May 6 20:06:07.842: INFO: (4) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... (200; 6.729035ms) May 6 20:06:07.842: INFO: (4) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:462/proxy/: tls qux (200; 6.753231ms) May 6 20:06:07.842: INFO: (4) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:460/proxy/: tls baz (200; 6.740883ms) May 6 20:06:07.842: INFO: (4) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 6.838691ms) May 6 20:06:07.842: INFO: (4) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname1/proxy/: tls baz (200; 7.047952ms) May 6 20:06:07.842: INFO: (4) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 7.048831ms) May 6 20:06:07.842: INFO: (4) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: test (200; 25.834759ms) May 6 20:06:07.868: INFO: (5) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 26.115005ms) May 6 20:06:07.870: INFO: (5) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:1080/proxy/: ... (200; 28.270056ms) May 6 20:06:07.871: INFO: (5) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 28.397772ms) May 6 20:06:07.871: INFO: (5) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname1/proxy/: foo (200; 28.544753ms) May 6 20:06:07.871: INFO: (5) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 28.329742ms) May 6 20:06:07.871: INFO: (5) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... 
(200; 28.405239ms) May 6 20:06:07.871: INFO: (5) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 28.369125ms) May 6 20:06:07.871: INFO: (5) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 28.428668ms) May 6 20:06:07.871: INFO: (5) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname1/proxy/: tls baz (200; 28.705077ms) May 6 20:06:07.871: INFO: (5) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 28.623957ms) May 6 20:06:07.871: INFO: (5) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:462/proxy/: tls qux (200; 28.793661ms) May 6 20:06:07.871: INFO: (5) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 28.702447ms) May 6 20:06:07.874: INFO: (6) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 3.036536ms) May 6 20:06:07.874: INFO: (6) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:1080/proxy/: ... (200; 3.188685ms) May 6 20:06:07.874: INFO: (6) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 3.49273ms) May 6 20:06:07.876: INFO: (6) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname1/proxy/: foo (200; 5.228354ms) May 6 20:06:07.876: INFO: (6) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... (200; 5.214991ms) May 6 20:06:07.876: INFO: (6) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: test (200; 6.638821ms) May 6 20:06:07.882: INFO: (7) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 4.086726ms) May 6 20:06:07.882: INFO: (7) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 4.46229ms) May 6 20:06:07.882: INFO: (7) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname1/proxy/: foo (200; 4.394371ms) May 6 20:06:07.882: INFO: (7) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 4.3999ms) May 6 20:06:07.882: INFO: (7) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 4.393346ms) May 6 20:06:07.882: INFO: (7) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd/proxy/: test (200; 4.646109ms) May 6 20:06:07.882: INFO: (7) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 4.591214ms) May 6 20:06:07.883: INFO: (7) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:1080/proxy/: ... (200; 4.857916ms) May 6 20:06:07.883: INFO: (7) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 4.985415ms) May 6 20:06:07.883: INFO: (7) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 5.287167ms) May 6 20:06:07.883: INFO: (7) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: test<... (200; 5.387786ms) May 6 20:06:07.883: INFO: (7) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:460/proxy/: tls baz (200; 5.403186ms) May 6 20:06:07.887: INFO: (8) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd/proxy/: test (200; 4.082571ms) May 6 20:06:07.887: INFO: (8) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 4.04075ms) May 6 20:06:07.888: INFO: (8) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... 
(200; 4.219752ms) May 6 20:06:07.888: INFO: (8) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 4.203113ms) May 6 20:06:07.888: INFO: (8) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:462/proxy/: tls qux (200; 4.219218ms) May 6 20:06:07.888: INFO: (8) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 4.264413ms) May 6 20:06:07.888: INFO: (8) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:1080/proxy/: ... (200; 4.269235ms) May 6 20:06:07.888: INFO: (8) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:460/proxy/: tls baz (200; 4.333448ms) May 6 20:06:07.888: INFO: (8) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: ... (200; 4.643502ms) May 6 20:06:07.894: INFO: (9) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... (200; 4.608857ms) May 6 20:06:07.894: INFO: (9) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: test (200; 4.577156ms) May 6 20:06:07.896: INFO: (9) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname1/proxy/: foo (200; 6.466321ms) May 6 20:06:07.896: INFO: (9) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname1/proxy/: tls baz (200; 6.569313ms) May 6 20:06:07.896: INFO: (9) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 6.675602ms) May 6 20:06:07.896: INFO: (9) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 6.7348ms) May 6 20:06:07.897: INFO: (9) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 6.879873ms) May 6 20:06:07.897: INFO: (9) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 7.123052ms) May 6 20:06:07.902: INFO: (10) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname1/proxy/: foo (200; 4.867767ms) May 6 20:06:07.902: INFO: (10) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 5.431963ms) May 6 20:06:07.902: INFO: (10) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 5.531549ms) May 6 20:06:07.902: INFO: (10) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 5.474035ms) May 6 20:06:07.902: INFO: (10) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 5.459976ms) May 6 20:06:07.902: INFO: (10) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname1/proxy/: tls baz (200; 5.573228ms) May 6 20:06:07.903: INFO: (10) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... (200; 5.646109ms) May 6 20:06:07.903: INFO: (10) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 5.652447ms) May 6 20:06:07.903: INFO: (10) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:462/proxy/: tls qux (200; 5.931755ms) May 6 20:06:07.903: INFO: (10) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:1080/proxy/: ... 
(200; 6.333249ms) May 6 20:06:07.903: INFO: (10) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 6.365841ms) May 6 20:06:07.903: INFO: (10) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:460/proxy/: tls baz (200; 6.453082ms) May 6 20:06:07.903: INFO: (10) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: test (200; 6.414097ms) May 6 20:06:07.903: INFO: (10) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 6.42016ms) May 6 20:06:07.903: INFO: (10) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 6.387612ms) May 6 20:06:07.909: INFO: (11) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:1080/proxy/: ... (200; 5.341181ms) May 6 20:06:07.909: INFO: (11) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... (200; 5.950729ms) May 6 20:06:07.909: INFO: (11) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname1/proxy/: tls baz (200; 5.971948ms) May 6 20:06:07.909: INFO: (11) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 5.994463ms) May 6 20:06:07.910: INFO: (11) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:460/proxy/: tls baz (200; 6.095896ms) May 6 20:06:07.910: INFO: (11) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 6.018163ms) May 6 20:06:07.910: INFO: (11) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 6.046372ms) May 6 20:06:07.910: INFO: (11) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 6.072936ms) May 6 20:06:07.910: INFO: (11) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname1/proxy/: foo (200; 6.065358ms) May 6 20:06:07.910: INFO: (11) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 6.206068ms) May 6 20:06:07.910: INFO: (11) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: test (200; 6.884561ms) May 6 20:06:07.914: INFO: (12) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:460/proxy/: tls baz (200; 3.559402ms) May 6 20:06:07.915: INFO: (12) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... (200; 4.050874ms) May 6 20:06:07.915: INFO: (12) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 4.180872ms) May 6 20:06:07.915: INFO: (12) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd/proxy/: test (200; 4.777283ms) May 6 20:06:07.915: INFO: (12) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 4.851387ms) May 6 20:06:07.915: INFO: (12) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 4.832318ms) May 6 20:06:07.915: INFO: (12) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 4.922365ms) May 6 20:06:07.916: INFO: (12) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 5.278791ms) May 6 20:06:07.916: INFO: (12) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:1080/proxy/: ... 
(200; 5.261162ms) May 6 20:06:07.916: INFO: (12) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 5.318127ms) May 6 20:06:07.916: INFO: (12) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:462/proxy/: tls qux (200; 5.343458ms) May 6 20:06:07.916: INFO: (12) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname1/proxy/: foo (200; 5.307333ms) May 6 20:06:07.916: INFO: (12) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: test (200; 10.066007ms) May 6 20:06:07.926: INFO: (13) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... (200; 10.145011ms) May 6 20:06:07.926: INFO: (13) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 10.088522ms) May 6 20:06:07.926: INFO: (13) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 10.121713ms) May 6 20:06:07.926: INFO: (13) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:462/proxy/: tls qux (200; 10.341376ms) May 6 20:06:07.926: INFO: (13) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:1080/proxy/: ... (200; 10.364314ms) May 6 20:06:07.926: INFO: (13) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: ... (200; 4.0101ms) May 6 20:06:07.931: INFO: (14) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd/proxy/: test (200; 4.129631ms) May 6 20:06:07.931: INFO: (14) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 4.112263ms) May 6 20:06:07.931: INFO: (14) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname1/proxy/: tls baz (200; 4.625253ms) May 6 20:06:07.931: INFO: (14) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:460/proxy/: tls baz (200; 4.812738ms) May 6 20:06:07.932: INFO: (14) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 5.049159ms) May 6 20:06:07.932: INFO: (14) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: test<... (200; 5.237883ms) May 6 20:06:07.932: INFO: (14) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 5.258678ms) May 6 20:06:07.932: INFO: (14) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname1/proxy/: foo (200; 5.333167ms) May 6 20:06:07.932: INFO: (14) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 5.469192ms) May 6 20:06:07.932: INFO: (14) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:462/proxy/: tls qux (200; 5.54975ms) May 6 20:06:07.932: INFO: (14) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 5.58735ms) May 6 20:06:07.932: INFO: (14) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 5.803828ms) May 6 20:06:07.936: INFO: (15) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... 
(200; 3.131183ms) May 6 20:06:07.937: INFO: (15) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:462/proxy/: tls qux (200; 4.420541ms) May 6 20:06:07.937: INFO: (15) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 4.380652ms) May 6 20:06:07.938: INFO: (15) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd/proxy/: test (200; 5.024914ms) May 6 20:06:07.938: INFO: (15) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 5.3161ms) May 6 20:06:07.938: INFO: (15) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:460/proxy/: tls baz (200; 5.693525ms) May 6 20:06:07.938: INFO: (15) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 5.688879ms) May 6 20:06:07.938: INFO: (15) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 5.692256ms) May 6 20:06:07.938: INFO: (15) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname1/proxy/: foo (200; 5.727551ms) May 6 20:06:07.938: INFO: (15) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname1/proxy/: tls baz (200; 5.718241ms) May 6 20:06:07.938: INFO: (15) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 5.794584ms) May 6 20:06:07.938: INFO: (15) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:1080/proxy/: ... (200; 5.745361ms) May 6 20:06:07.938: INFO: (15) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 5.773649ms) May 6 20:06:07.938: INFO: (15) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: ... (200; 3.781672ms) May 6 20:06:07.942: INFO: (16) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 3.814085ms) May 6 20:06:07.943: INFO: (16) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 4.203276ms) May 6 20:06:07.943: INFO: (16) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname1/proxy/: foo (200; 4.312804ms) May 6 20:06:07.944: INFO: (16) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 5.275432ms) May 6 20:06:07.944: INFO: (16) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 5.25684ms) May 6 20:06:07.944: INFO: (16) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 5.309417ms) May 6 20:06:07.944: INFO: (16) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 5.379719ms) May 6 20:06:07.945: INFO: (16) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:462/proxy/: tls qux (200; 6.810337ms) May 6 20:06:07.945: INFO: (16) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... 
(200; 6.883041ms) May 6 20:06:07.945: INFO: (16) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 6.872758ms) May 6 20:06:07.945: INFO: (16) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: test (200; 10.66247ms) May 6 20:06:07.949: INFO: (16) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 10.799835ms) May 6 20:06:07.952: INFO: (17) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 2.947961ms) May 6 20:06:07.952: INFO: (17) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 2.869757ms) May 6 20:06:07.952: INFO: (17) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:460/proxy/: tls baz (200; 3.143552ms) May 6 20:06:07.955: INFO: (17) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 5.630204ms) May 6 20:06:07.955: INFO: (17) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 5.835812ms) May 6 20:06:07.956: INFO: (17) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 6.385177ms) May 6 20:06:07.956: INFO: (17) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 6.343994ms) May 6 20:06:07.956: INFO: (17) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 6.650286ms) May 6 20:06:07.956: INFO: (17) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd/proxy/: test (200; 6.708756ms) May 6 20:06:07.956: INFO: (17) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... (200; 6.578272ms) May 6 20:06:07.956: INFO: (17) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:462/proxy/: tls qux (200; 6.829733ms) May 6 20:06:07.956: INFO: (17) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:1080/proxy/: ... (200; 6.873752ms) May 6 20:06:07.956: INFO: (17) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: ... (200; 2.224948ms) May 6 20:06:07.961: INFO: (18) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:460/proxy/: tls baz (200; 3.47791ms) May 6 20:06:07.961: INFO: (18) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 3.443247ms) May 6 20:06:07.961: INFO: (18) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: test (200; 3.70786ms) May 6 20:06:07.961: INFO: (18) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 3.738398ms) May 6 20:06:07.961: INFO: (18) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... 
(200; 3.793154ms) May 6 20:06:07.962: INFO: (18) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 4.53755ms) May 6 20:06:07.962: INFO: (18) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname2/proxy/: bar (200; 5.207742ms) May 6 20:06:07.962: INFO: (18) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 5.325934ms) May 6 20:06:07.962: INFO: (18) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 5.377972ms) May 6 20:06:07.962: INFO: (18) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname1/proxy/: tls baz (200; 5.362564ms) May 6 20:06:07.962: INFO: (18) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 5.397115ms) May 6 20:06:07.963: INFO: (18) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 5.567469ms) May 6 20:06:07.963: INFO: (18) /api/v1/namespaces/proxy-8632/services/http:proxy-service-8q5zn:portname1/proxy/: foo (200; 5.776009ms) May 6 20:06:07.966: INFO: (19) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 2.907824ms) May 6 20:06:07.968: INFO: (19) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:1080/proxy/: test<... (200; 4.404314ms) May 6 20:06:07.968: INFO: (19) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:462/proxy/: tls qux (200; 4.230916ms) May 6 20:06:07.968: INFO: (19) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:162/proxy/: bar (200; 3.997854ms) May 6 20:06:07.968: INFO: (19) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/: foo (200; 4.222437ms) May 6 20:06:07.968: INFO: (19) /api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd/proxy/: test (200; 4.539709ms) May 6 20:06:07.968: INFO: (19) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/: ... (200; 3.707036ms) May 6 20:06:07.968: INFO: (19) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname1/proxy/: tls baz (200; 4.690901ms) May 6 20:06:07.968: INFO: (19) /api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname2/proxy/: bar (200; 4.467428ms) May 6 20:06:07.968: INFO: (19) /api/v1/namespaces/proxy-8632/pods/http:proxy-service-8q5zn-94pdd:160/proxy/: foo (200; 4.902163ms) May 6 20:06:07.968: INFO: (19) /api/v1/namespaces/proxy-8632/services/https:proxy-service-8q5zn:tlsportname2/proxy/: tls qux (200; 4.165697ms) May 6 20:06:07.968: INFO: (19) /api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:460/proxy/: tls baz (200; 4.806073ms) STEP: deleting ReplicationController proxy-service-8q5zn in namespace proxy-8632, will wait for the garbage collector to delete the pods May 6 20:06:08.025: INFO: Deleting ReplicationController proxy-service-8q5zn took: 4.99076ms May 6 20:06:08.325: INFO: Terminating ReplicationController proxy-service-8q5zn pods took: 300.271342ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:06:15.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8632" for this suite. 
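Every numbered attempt above is a GET issued through the apiserver's proxy subresource, fanned out across pod and service targets, named and numeric ports, and http/https scheme prefixes. The same URL forms can be exercised by hand with kubectl get --raw; the paths below are taken directly from this run:

# Proxy to a pod port (numeric) via the apiserver:
kubectl get --raw "/api/v1/namespaces/proxy-8632/pods/proxy-service-8q5zn-94pdd:160/proxy/"
# Proxy to a service port by name; the service forwards to the echo pod:
kubectl get --raw "/api/v1/namespaces/proxy-8632/services/proxy-service-8q5zn:portname1/proxy/"
# A scheme prefix tells the apiserver how to dial the backend:
kubectl get --raw "/api/v1/namespaces/proxy-8632/pods/https:proxy-service-8q5zn-94pdd:443/proxy/"

The reported latencies (a few milliseconds, with occasional ~25 ms outliers) are full round trips through the apiserver to the agnhost echo server and back.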
• [SLOW TEST:18.030 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":52,"skipped":911,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:06:15.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-kctd STEP: Creating a pod to test atomic-volume-subpath May 6 20:06:15.437: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kctd" in namespace "subpath-7399" to be "Succeeded or Failed" May 6 20:06:15.443: INFO: Pod "pod-subpath-test-configmap-kctd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.633484ms May 6 20:06:17.447: INFO: Pod "pod-subpath-test-configmap-kctd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00988454s May 6 20:06:19.451: INFO: Pod "pod-subpath-test-configmap-kctd": Phase="Running", Reason="", readiness=true. Elapsed: 4.013601413s May 6 20:06:21.455: INFO: Pod "pod-subpath-test-configmap-kctd": Phase="Running", Reason="", readiness=true. Elapsed: 6.018230797s May 6 20:06:23.460: INFO: Pod "pod-subpath-test-configmap-kctd": Phase="Running", Reason="", readiness=true. Elapsed: 8.022701271s May 6 20:06:25.464: INFO: Pod "pod-subpath-test-configmap-kctd": Phase="Running", Reason="", readiness=true. Elapsed: 10.02704548s May 6 20:06:27.467: INFO: Pod "pod-subpath-test-configmap-kctd": Phase="Running", Reason="", readiness=true. Elapsed: 12.029923976s May 6 20:06:29.478: INFO: Pod "pod-subpath-test-configmap-kctd": Phase="Running", Reason="", readiness=true. Elapsed: 14.040676659s May 6 20:06:31.496: INFO: Pod "pod-subpath-test-configmap-kctd": Phase="Running", Reason="", readiness=true. Elapsed: 16.05849706s May 6 20:06:33.500: INFO: Pod "pod-subpath-test-configmap-kctd": Phase="Running", Reason="", readiness=true. Elapsed: 18.063087133s May 6 20:06:35.504: INFO: Pod "pod-subpath-test-configmap-kctd": Phase="Running", Reason="", readiness=true. Elapsed: 20.066683059s May 6 20:06:37.688: INFO: Pod "pod-subpath-test-configmap-kctd": Phase="Running", Reason="", readiness=true. Elapsed: 22.250724861s May 6 20:06:39.692: INFO: Pod "pod-subpath-test-configmap-kctd": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.255189741s May 6 20:06:41.696: INFO: Pod "pod-subpath-test-configmap-kctd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.258713925s STEP: Saw pod success May 6 20:06:41.696: INFO: Pod "pod-subpath-test-configmap-kctd" satisfied condition "Succeeded or Failed" May 6 20:06:41.699: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-kctd container test-container-subpath-configmap-kctd: STEP: delete the pod May 6 20:06:41.916: INFO: Waiting for pod pod-subpath-test-configmap-kctd to disappear May 6 20:06:41.986: INFO: Pod pod-subpath-test-configmap-kctd no longer exists STEP: Deleting pod pod-subpath-test-configmap-kctd May 6 20:06:41.986: INFO: Deleting pod "pod-subpath-test-configmap-kctd" in namespace "subpath-7399" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:06:41.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7399" for this suite.
• [SLOW TEST:26.894 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":53,"skipped":941,"failed":0} SSSSSSSSSSSSSSSSSSSS
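What just passed: the pod mounted a single ConfigMap key, via subPath, at a mountPath that is an existing file in the container image, and the test verified the container reads the expected content for the pod's lifetime. A rough sketch of that volume wiring; resource names and the target file are illustrative, not the generated ones from this run:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-cm            # illustrative name
data:
  data.txt: "mount me over an existing file"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /etc/resolv.conf"]   # prints the ConfigMap content
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/resolv.conf   # an existing file in the image, overlaid...
      subPath: data.txt             # ...with one key of the ConfigMap volume
  volumes:
  - name: cm-volume
    configMap:
      name: subpath-cm
EOF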
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:06:42.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 6 20:06:42.534: INFO: Waiting up to 5m0s for pod "downward-api-23a6628b-c362-4634-a860-be2fdbd64e66" in namespace "downward-api-9775" to be "Succeeded or Failed" May 6 20:06:42.548: INFO: Pod "downward-api-23a6628b-c362-4634-a860-be2fdbd64e66": Phase="Pending", Reason="", readiness=false. Elapsed: 14.354063ms May 6 20:06:44.553: INFO: Pod "downward-api-23a6628b-c362-4634-a860-be2fdbd64e66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019307182s May 6 20:06:46.557: INFO: Pod "downward-api-23a6628b-c362-4634-a860-be2fdbd64e66": Phase="Running", Reason="", readiness=true. Elapsed: 4.022636129s May 6 20:06:48.560: INFO: Pod "downward-api-23a6628b-c362-4634-a860-be2fdbd64e66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026245208s STEP: Saw pod success May 6 20:06:48.560: INFO: Pod "downward-api-23a6628b-c362-4634-a860-be2fdbd64e66" satisfied condition "Succeeded or Failed" May 6 20:06:48.564: INFO: Trying to get logs from node latest-worker pod downward-api-23a6628b-c362-4634-a860-be2fdbd64e66 container dapi-container: STEP: delete the pod May 6 20:06:48.840: INFO: Waiting for pod downward-api-23a6628b-c362-4634-a860-be2fdbd64e66 to disappear May 6 20:06:48.887: INFO: Pod downward-api-23a6628b-c362-4634-a860-be2fdbd64e66 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:06:48.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9775" for this suite.
• [SLOW TEST:6.662 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":54,"skipped":961,"failed":0} SSSSSSSSSS
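The passing check above relies on resourceFieldRef defaulting: when a container declares no CPU or memory limits, the downward API resolves limits.cpu and limits.memory to the node's allocatable capacity. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo     # illustrative; the test generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu      # no limit declared, so node-allocatable CPU is injected
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory   # likewise, node-allocatable memory
EOF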
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:06:48.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:06:49.043: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 6 20:06:52.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4757 create -f -' May 6 20:06:57.320: INFO: stderr: "" May 6 20:06:57.320: INFO: stdout: "e2e-test-crd-publish-openapi-7558-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 6 20:06:57.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4757 delete e2e-test-crd-publish-openapi-7558-crds test-cr' May 6 20:06:57.439: INFO: stderr: "" May 6 20:06:57.439: INFO: stdout: "e2e-test-crd-publish-openapi-7558-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 6 20:06:57.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4757 apply -f -' May 6 20:06:57.693: INFO: stderr: "" May 6 20:06:57.693: INFO: stdout: "e2e-test-crd-publish-openapi-7558-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 6 20:06:57.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4757 delete e2e-test-crd-publish-openapi-7558-crds test-cr' May 6 20:06:57.812: INFO: stderr: "" May 6 20:06:57.812: INFO: stdout: "e2e-test-crd-publish-openapi-7558-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 6 20:06:57.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7558-crds' May 6 20:06:58.079: INFO: stderr: "" May 6 20:06:58.079: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7558-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:07:00.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4757" for this suite.
• [SLOW TEST:11.124 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":55,"skipped":971,"failed":0} SSSSSSSSSSSSSSSSSSS
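The interesting part of the run above is the schema: "preserving unknown fields at the schema root" means the CRD publishes an OpenAPI v3 schema whose root sets x-kubernetes-preserve-unknown-fields: true, so the apiserver does not prune arbitrary properties and kubectl's client-side validation accepts any payload; kubectl explain consequently has an empty DESCRIPTION to print. A sketch of such a CRD; the group and names are illustrative stand-ins for the generated e2e-test-crd-publish-openapi-7558 ones:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # disable pruning at the root
EOF
# Any unknown property is now accepted, mirroring the create/apply steps above:
kubectl apply -f - <<'EOF'
apiVersion: example.com/v1
kind: Widget
metadata:
  name: test-cr
anythingGoes: {"not": "in any schema"}
EOF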
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 6 20:07:10.488: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 20:07:10.493: INFO: Pod pod-with-poststart-http-hook still exists May 6 20:07:12.494: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 20:07:12.498: INFO: Pod pod-with-poststart-http-hook still exists May 6 20:07:14.494: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 20:07:14.499: INFO: Pod pod-with-poststart-http-hook still exists May 6 20:07:16.494: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 20:07:16.499: INFO: Pod pod-with-poststart-http-hook still exists May 6 20:07:18.494: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 20:07:19.129: INFO: Pod pod-with-poststart-http-hook still exists May 6 20:07:20.494: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 20:07:20.497: INFO: Pod pod-with-poststart-http-hook still exists May 6 20:07:22.494: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 20:07:22.498: INFO: Pod pod-with-poststart-http-hook still exists May 6 20:07:24.494: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 20:07:24.498: INFO: Pod pod-with-poststart-http-hook still exists May 6 20:07:26.494: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 20:07:26.608: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:07:26.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9245" for this suite. 
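The Services test that follows toggles session affinity on a ClusterIP service: with affinity off, the sixteen curl probes are spread across all three backends, and after switching it on every probe sticks to a single endpoint. The toggle itself is one spec field; a sketch with illustrative service and namespace names, not the generated ones in this run:

    # Sketch: flip ClientIP session affinity on an existing ClusterIP service.
    kubectl patch service affinity-clusterip-transition -n services-demo \
      --type merge -p '{"spec":{"sessionAffinity":"ClientIP"}}'
    # ...and back to the default (no affinity, connections spread round-robin):
    kubectl patch service affinity-clusterip-transition -n services-demo \
      --type merge -p '{"spec":{"sessionAffinity":"None"}}'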
• [SLOW TEST:26.615 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":56,"skipped":990,"failed":0} [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:07:26.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2681 STEP: creating service affinity-clusterip-transition in namespace services-2681 STEP: creating replication controller affinity-clusterip-transition in namespace services-2681 I0506 20:07:27.512408 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-2681, replica count: 3 I0506 20:07:30.562830 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 20:07:33.563088 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 20:07:36.563309 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 20:07:36.793: INFO: Creating new exec pod May 6 20:07:44.072: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2681 execpod-affinityqc8rs -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 6 20:07:44.322: INFO: stderr: "I0506 20:07:44.205420 789 log.go:172] (0xc0009891e0) (0xc000b646e0) Create stream\nI0506 20:07:44.205462 789 log.go:172] (0xc0009891e0) (0xc000b646e0) Stream added, broadcasting: 1\nI0506 20:07:44.208707 789 log.go:172] (0xc0009891e0) Reply frame received for 1\nI0506 20:07:44.208733 789 log.go:172] (0xc0009891e0) (0xc000802c80) Create stream\nI0506 20:07:44.208741 789 log.go:172] (0xc0009891e0) (0xc000802c80) Stream added, broadcasting: 3\nI0506 20:07:44.209677 789 log.go:172] (0xc0009891e0) Reply frame received for 3\nI0506 20:07:44.209746 789 log.go:172] (0xc0009891e0) (0xc000803c20) Create stream\nI0506 20:07:44.209779 789 
log.go:172] (0xc0009891e0) (0xc000803c20) Stream added, broadcasting: 5\nI0506 20:07:44.210515 789 log.go:172] (0xc0009891e0) Reply frame received for 5\nI0506 20:07:44.313873 789 log.go:172] (0xc0009891e0) Data frame received for 5\nI0506 20:07:44.313908 789 log.go:172] (0xc000803c20) (5) Data frame handling\nI0506 20:07:44.313923 789 log.go:172] (0xc000803c20) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0506 20:07:44.314405 789 log.go:172] (0xc0009891e0) Data frame received for 5\nI0506 20:07:44.314426 789 log.go:172] (0xc000803c20) (5) Data frame handling\nI0506 20:07:44.314445 789 log.go:172] (0xc000803c20) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0506 20:07:44.315405 789 log.go:172] (0xc0009891e0) Data frame received for 3\nI0506 20:07:44.315459 789 log.go:172] (0xc000802c80) (3) Data frame handling\nI0506 20:07:44.315498 789 log.go:172] (0xc0009891e0) Data frame received for 5\nI0506 20:07:44.315516 789 log.go:172] (0xc000803c20) (5) Data frame handling\nI0506 20:07:44.316735 789 log.go:172] (0xc0009891e0) Data frame received for 1\nI0506 20:07:44.316756 789 log.go:172] (0xc000b646e0) (1) Data frame handling\nI0506 20:07:44.316778 789 log.go:172] (0xc000b646e0) (1) Data frame sent\nI0506 20:07:44.316805 789 log.go:172] (0xc0009891e0) (0xc000b646e0) Stream removed, broadcasting: 1\nI0506 20:07:44.316937 789 log.go:172] (0xc0009891e0) Go away received\nI0506 20:07:44.317088 789 log.go:172] (0xc0009891e0) (0xc000b646e0) Stream removed, broadcasting: 1\nI0506 20:07:44.317249 789 log.go:172] (0xc0009891e0) (0xc000802c80) Stream removed, broadcasting: 3\nI0506 20:07:44.317272 789 log.go:172] (0xc0009891e0) (0xc000803c20) Stream removed, broadcasting: 5\n" May 6 20:07:44.322: INFO: stdout: "" May 6 20:07:44.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2681 execpod-affinityqc8rs -- /bin/sh -x -c nc -zv -t -w 2 10.111.106.141 80' May 6 20:07:44.497: INFO: stderr: "I0506 20:07:44.444935 810 log.go:172] (0xc00003ad10) (0xc0005f0460) Create stream\nI0506 20:07:44.444985 810 log.go:172] (0xc00003ad10) (0xc0005f0460) Stream added, broadcasting: 1\nI0506 20:07:44.447131 810 log.go:172] (0xc00003ad10) Reply frame received for 1\nI0506 20:07:44.447162 810 log.go:172] (0xc00003ad10) (0xc00043cc80) Create stream\nI0506 20:07:44.447173 810 log.go:172] (0xc00003ad10) (0xc00043cc80) Stream added, broadcasting: 3\nI0506 20:07:44.447833 810 log.go:172] (0xc00003ad10) Reply frame received for 3\nI0506 20:07:44.447862 810 log.go:172] (0xc00003ad10) (0xc00030a000) Create stream\nI0506 20:07:44.447871 810 log.go:172] (0xc00003ad10) (0xc00030a000) Stream added, broadcasting: 5\nI0506 20:07:44.448541 810 log.go:172] (0xc00003ad10) Reply frame received for 5\nI0506 20:07:44.491605 810 log.go:172] (0xc00003ad10) Data frame received for 3\nI0506 20:07:44.491634 810 log.go:172] (0xc00043cc80) (3) Data frame handling\nI0506 20:07:44.491681 810 log.go:172] (0xc00003ad10) Data frame received for 5\nI0506 20:07:44.491717 810 log.go:172] (0xc00030a000) (5) Data frame handling\nI0506 20:07:44.491746 810 log.go:172] (0xc00030a000) (5) Data frame sent\nI0506 20:07:44.491790 810 log.go:172] (0xc00003ad10) Data frame received for 5\nI0506 20:07:44.491801 810 log.go:172] (0xc00030a000) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.106.141 80\nConnection to 10.111.106.141 80 port [tcp/http] succeeded!\nI0506 20:07:44.492716 810 log.go:172] 
(0xc00003ad10) Data frame received for 1\nI0506 20:07:44.492732 810 log.go:172] (0xc0005f0460) (1) Data frame handling\nI0506 20:07:44.492742 810 log.go:172] (0xc0005f0460) (1) Data frame sent\nI0506 20:07:44.492764 810 log.go:172] (0xc00003ad10) (0xc0005f0460) Stream removed, broadcasting: 1\nI0506 20:07:44.492968 810 log.go:172] (0xc00003ad10) Go away received\nI0506 20:07:44.493017 810 log.go:172] (0xc00003ad10) (0xc0005f0460) Stream removed, broadcasting: 1\nI0506 20:07:44.493036 810 log.go:172] (0xc00003ad10) (0xc00043cc80) Stream removed, broadcasting: 3\nI0506 20:07:44.493048 810 log.go:172] (0xc00003ad10) (0xc00030a000) Stream removed, broadcasting: 5\n" May 6 20:07:44.497: INFO: stdout: "" May 6 20:07:44.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2681 execpod-affinityqc8rs -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.111.106.141:80/ ; done' May 6 20:07:44.822: INFO: stderr: "I0506 20:07:44.647177 831 log.go:172] (0xc00043a0b0) (0xc0004aba40) Create stream\nI0506 20:07:44.647250 831 log.go:172] (0xc00043a0b0) (0xc0004aba40) Stream added, broadcasting: 1\nI0506 20:07:44.650665 831 log.go:172] (0xc00043a0b0) Reply frame received for 1\nI0506 20:07:44.650707 831 log.go:172] (0xc00043a0b0) (0xc0005401e0) Create stream\nI0506 20:07:44.650736 831 log.go:172] (0xc00043a0b0) (0xc0005401e0) Stream added, broadcasting: 3\nI0506 20:07:44.651593 831 log.go:172] (0xc00043a0b0) Reply frame received for 3\nI0506 20:07:44.651625 831 log.go:172] (0xc00043a0b0) (0xc000482c80) Create stream\nI0506 20:07:44.651637 831 log.go:172] (0xc00043a0b0) (0xc000482c80) Stream added, broadcasting: 5\nI0506 20:07:44.652528 831 log.go:172] (0xc00043a0b0) Reply frame received for 5\nI0506 20:07:44.720774 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.720806 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.720822 831 log.go:172] (0xc000482c80) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:44.720842 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.720851 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.720862 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.725742 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.725763 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.725773 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.725951 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.725995 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.726012 831 log.go:172] (0xc000482c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:44.726032 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.726043 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.726067 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.731755 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.731781 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.731814 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.732180 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.732218 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.732235 831 
log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.732259 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.732273 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.732299 831 log.go:172] (0xc000482c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:44.738083 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.738103 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.738123 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.738792 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.738816 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.738824 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.738841 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.738846 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.738854 831 log.go:172] (0xc000482c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:44.743370 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.743387 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.743409 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.743676 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.743692 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.743700 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.743713 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.743718 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.743733 831 log.go:172] (0xc000482c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:44.750295 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.750314 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.750330 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.751027 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.751041 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.751065 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.751099 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.751119 831 log.go:172] (0xc000482c80) (5) Data frame sent\nI0506 20:07:44.751140 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.751153 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.751184 831 log.go:172] (0xc0005401e0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:44.751219 831 log.go:172] (0xc000482c80) (5) Data frame sent\nI0506 20:07:44.758713 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.758741 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.758759 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.759380 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.759415 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.759436 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.759465 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.759482 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.759500 831 log.go:172] (0xc000482c80) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:44.766910 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.766940 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.766963 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.767500 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.767546 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.767571 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.767600 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.767621 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.767654 831 log.go:172] (0xc000482c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:44.775568 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.775590 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.775605 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.775903 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.775917 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.775944 831 log.go:172] (0xc000482c80) (5) Data frame sent\nI0506 20:07:44.775952 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.775959 831 log.go:172] (0xc000482c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:44.775990 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.776016 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.776035 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.776072 831 log.go:172] (0xc000482c80) (5) Data frame sent\nI0506 20:07:44.781069 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.781085 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.781101 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.781700 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.781713 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.781732 831 log.go:172] (0xc000482c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0506 20:07:44.781796 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.781816 831 log.go:172] (0xc000482c80) (5) Data frame handling\n 2 http://10.111.106.141:80/\nI0506 20:07:44.781840 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.781860 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.781869 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.781885 831 log.go:172] (0xc000482c80) (5) Data frame sent\nI0506 20:07:44.786601 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.786631 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.786658 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.787007 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.787041 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.787064 831 log.go:172] (0xc000482c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:44.787096 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.787115 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.787141 
831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.791072 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.791090 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.791104 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.791437 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.791452 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.791465 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.791519 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.791546 831 log.go:172] (0xc000482c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:44.791562 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.795185 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.795203 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.795218 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.795658 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.795682 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.795712 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.795750 831 log.go:172] (0xc000482c80) (5) Data frame sent\nI0506 20:07:44.795773 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.795794 831 log.go:172] (0xc000482c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/I0506 20:07:44.795814 831 log.go:172] (0xc0005401e0) (3) Data frame handling\n\nI0506 20:07:44.795831 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.795865 831 log.go:172] (0xc000482c80) (5) Data frame sent\nI0506 20:07:44.799750 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.799776 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.799829 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.800110 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.800138 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.800166 831 log.go:172] (0xc000482c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/I0506 20:07:44.800196 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.800228 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.800245 831 log.go:172] (0xc000482c80) (5) Data frame sent\nI0506 20:07:44.800254 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.800264 831 log.go:172] (0xc0005401e0) (3) Data frame handling\n\nI0506 20:07:44.800272 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.804528 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.804552 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.804570 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.805345 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.805364 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.805369 831 log.go:172] (0xc000482c80) (5) Data frame sent\nI0506 20:07:44.805374 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.805377 831 log.go:172] (0xc000482c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:44.805391 831 log.go:172] (0xc000482c80) (5) 
Data frame sent\nI0506 20:07:44.805396 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.805401 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.805406 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.809858 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.809881 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.809900 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.810283 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.810292 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.810300 831 log.go:172] (0xc000482c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0506 20:07:44.810363 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.810371 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.810376 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.810389 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.810404 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.810420 831 log.go:172] (0xc000482c80) (5) Data frame sent\n http://10.111.106.141:80/\nI0506 20:07:44.814244 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.814263 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.814269 831 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:07:44.814928 831 log.go:172] (0xc00043a0b0) Data frame received for 3\nI0506 20:07:44.814942 831 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:07:44.815049 831 log.go:172] (0xc00043a0b0) Data frame received for 5\nI0506 20:07:44.815070 831 log.go:172] (0xc000482c80) (5) Data frame handling\nI0506 20:07:44.816788 831 log.go:172] (0xc00043a0b0) Data frame received for 1\nI0506 20:07:44.816801 831 log.go:172] (0xc0004aba40) (1) Data frame handling\nI0506 20:07:44.816807 831 log.go:172] (0xc0004aba40) (1) Data frame sent\nI0506 20:07:44.816944 831 log.go:172] (0xc00043a0b0) (0xc0004aba40) Stream removed, broadcasting: 1\nI0506 20:07:44.817000 831 log.go:172] (0xc00043a0b0) Go away received\nI0506 20:07:44.817517 831 log.go:172] (0xc00043a0b0) (0xc0004aba40) Stream removed, broadcasting: 1\nI0506 20:07:44.817537 831 log.go:172] (0xc00043a0b0) (0xc0005401e0) Stream removed, broadcasting: 3\nI0506 20:07:44.817546 831 log.go:172] (0xc00043a0b0) (0xc000482c80) Stream removed, broadcasting: 5\n" May 6 20:07:44.823: INFO: stdout: "\naffinity-clusterip-transition-5zwgb\naffinity-clusterip-transition-5zwgb\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-575tv\naffinity-clusterip-transition-5zwgb\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-5zwgb\naffinity-clusterip-transition-575tv\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-5zwgb\naffinity-clusterip-transition-575tv\naffinity-clusterip-transition-5zwgb\naffinity-clusterip-transition-5zwgb\naffinity-clusterip-transition-575tv\naffinity-clusterip-transition-5zwgb\naffinity-clusterip-transition-kr26q" May 6 20:07:44.823: INFO: Received response from host: May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-5zwgb May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-5zwgb May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-575tv May 
6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-5zwgb May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-5zwgb May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-575tv May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-5zwgb May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-575tv May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-5zwgb May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-5zwgb May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-575tv May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-5zwgb May 6 20:07:44.823: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:44.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2681 execpod-affinityqc8rs -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.111.106.141:80/ ; done' May 6 20:07:45.107: INFO: stderr: "I0506 20:07:44.952451 854 log.go:172] (0xc0009b1290) (0xc0009be320) Create stream\nI0506 20:07:44.952515 854 log.go:172] (0xc0009b1290) (0xc0009be320) Stream added, broadcasting: 1\nI0506 20:07:44.956609 854 log.go:172] (0xc0009b1290) Reply frame received for 1\nI0506 20:07:44.956661 854 log.go:172] (0xc0009b1290) (0xc0004cc140) Create stream\nI0506 20:07:44.956676 854 log.go:172] (0xc0009b1290) (0xc0004cc140) Stream added, broadcasting: 3\nI0506 20:07:44.957642 854 log.go:172] (0xc0009b1290) Reply frame received for 3\nI0506 20:07:44.957661 854 log.go:172] (0xc0009b1290) (0xc0003f8c80) Create stream\nI0506 20:07:44.957667 854 log.go:172] (0xc0009b1290) (0xc0003f8c80) Stream added, broadcasting: 5\nI0506 20:07:44.958396 854 log.go:172] (0xc0009b1290) Reply frame received for 5\nI0506 20:07:45.011962 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.011988 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.012006 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.012035 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.012045 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.012053 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:45.017867 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.017892 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.017915 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.018332 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.018368 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.018384 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:45.018408 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.018421 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.018436 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.022348 854 log.go:172] 
(0xc0009b1290) Data frame received for 3\nI0506 20:07:45.022379 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.022398 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.022916 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.022927 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.022933 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.022950 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.022983 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.023010 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:45.029923 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.029937 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.029943 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.030406 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.030425 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.030431 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:45.030439 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.030451 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.030459 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.037370 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.037398 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.037427 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.037721 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.037747 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:45.037777 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.037799 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.037810 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.037828 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\nI0506 20:07:45.040591 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.040620 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.040641 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.041762 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.041789 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.041809 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.041831 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.041842 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:45.041858 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.045108 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.045288 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.045305 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.046103 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.046126 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.046149 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.046173 854 log.go:172] (0xc0009b1290) Data frame received 
for 5\nI0506 20:07:45.046185 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.046217 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:45.051967 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.051994 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.052022 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.052373 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.052399 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.052413 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.052446 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.052457 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.052480 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:45.056049 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.056079 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.056099 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.056466 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.056480 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.056498 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.056539 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.056560 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\nI0506 20:07:45.056574 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.056586 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:45.056611 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\nI0506 20:07:45.056630 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.063033 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.063062 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.063092 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.063453 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.063472 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.063485 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.063503 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.063513 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.063532 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\nI0506 20:07:45.063546 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.063557 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:45.063581 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\nI0506 20:07:45.067918 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.067944 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.067963 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.068677 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.068694 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.068715 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.068723 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\nI0506 20:07:45.068729 854 
log.go:172] (0xc0009b1290) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0506 20:07:45.068736 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.068778 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\n http://10.111.106.141:80/\nI0506 20:07:45.068796 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.068819 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.072953 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.072968 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.072990 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.073586 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.073600 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.073613 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.073638 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.073653 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:45.073670 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.078434 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.078458 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.078495 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.078849 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.078881 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.078903 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0506 20:07:45.078917 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.078931 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.078947 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.078982 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.079004 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\n http://10.111.106.141:80/\nI0506 20:07:45.079025 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.082963 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.082977 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.082986 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.083390 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.083411 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.083436 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:45.083683 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.083699 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.083716 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.088050 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.088076 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.088087 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.088607 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.088630 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.088642 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.088656 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.088664 854 log.go:172] (0xc0003f8c80) (5) 
Data frame handling\nI0506 20:07:45.088672 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:45.093800 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.093825 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.093854 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.094334 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.094352 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.094371 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\nI0506 20:07:45.094386 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.094401 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.106.141:80/\nI0506 20:07:45.094423 854 log.go:172] (0xc0003f8c80) (5) Data frame sent\nI0506 20:07:45.094504 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.094533 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.094559 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.099349 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.099363 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.099371 854 log.go:172] (0xc0004cc140) (3) Data frame sent\nI0506 20:07:45.100289 854 log.go:172] (0xc0009b1290) Data frame received for 5\nI0506 20:07:45.100318 854 log.go:172] (0xc0003f8c80) (5) Data frame handling\nI0506 20:07:45.100461 854 log.go:172] (0xc0009b1290) Data frame received for 3\nI0506 20:07:45.100495 854 log.go:172] (0xc0004cc140) (3) Data frame handling\nI0506 20:07:45.102151 854 log.go:172] (0xc0009b1290) Data frame received for 1\nI0506 20:07:45.102177 854 log.go:172] (0xc0009be320) (1) Data frame handling\nI0506 20:07:45.102189 854 log.go:172] (0xc0009be320) (1) Data frame sent\nI0506 20:07:45.102204 854 log.go:172] (0xc0009b1290) (0xc0009be320) Stream removed, broadcasting: 1\nI0506 20:07:45.102334 854 log.go:172] (0xc0009b1290) Go away received\nI0506 20:07:45.102578 854 log.go:172] (0xc0009b1290) (0xc0009be320) Stream removed, broadcasting: 1\nI0506 20:07:45.102596 854 log.go:172] (0xc0009b1290) (0xc0004cc140) Stream removed, broadcasting: 3\nI0506 20:07:45.102607 854 log.go:172] (0xc0009b1290) (0xc0003f8c80) Stream removed, broadcasting: 5\n" May 6 20:07:45.107: INFO: stdout: "\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q\naffinity-clusterip-transition-kr26q" May 6 20:07:45.107: INFO: Received response from host: May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q 
May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Received response from host: affinity-clusterip-transition-kr26q May 6 20:07:45.107: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-2681, will wait for the garbage collector to delete the pods May 6 20:07:45.302: INFO: Deleting ReplicationController affinity-clusterip-transition took: 84.598554ms May 6 20:07:45.703: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.297888ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:07:55.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2681" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:29.085 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":57,"skipped":990,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:07:55.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3667.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3667.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3667.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3667.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3667.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3667.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 20:08:04.282: INFO: DNS probes using dns-3667/dns-test-bd4a407b-e6e2-4d35-b491-826285d2f80b succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:08:04.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3667" for this suite. • [SLOW TEST:9.501 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":58,"skipped":994,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:08:05.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 6 20:08:05.854: INFO: Waiting up to 5m0s for pod "var-expansion-d8f82a0d-9a3b-4dbc-9ae5-d69d5290e896" in namespace "var-expansion-3339" to be "Succeeded or Failed" May 6 20:08:05.928: INFO: Pod "var-expansion-d8f82a0d-9a3b-4dbc-9ae5-d69d5290e896": Phase="Pending", Reason="", readiness=false. Elapsed: 73.79463ms May 6 20:08:08.144: INFO: Pod "var-expansion-d8f82a0d-9a3b-4dbc-9ae5-d69d5290e896": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.289881489s May 6 20:08:10.252: INFO: Pod "var-expansion-d8f82a0d-9a3b-4dbc-9ae5-d69d5290e896": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397805782s May 6 20:08:12.257: INFO: Pod "var-expansion-d8f82a0d-9a3b-4dbc-9ae5-d69d5290e896": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.403112799s STEP: Saw pod success May 6 20:08:12.257: INFO: Pod "var-expansion-d8f82a0d-9a3b-4dbc-9ae5-d69d5290e896" satisfied condition "Succeeded or Failed" May 6 20:08:12.260: INFO: Trying to get logs from node latest-worker2 pod var-expansion-d8f82a0d-9a3b-4dbc-9ae5-d69d5290e896 container dapi-container: STEP: delete the pod May 6 20:08:12.564: INFO: Waiting for pod var-expansion-d8f82a0d-9a3b-4dbc-9ae5-d69d5290e896 to disappear May 6 20:08:12.610: INFO: Pod var-expansion-d8f82a0d-9a3b-4dbc-9ae5-d69d5290e896 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:08:12.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3339" for this suite. • [SLOW TEST:7.430 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":59,"skipped":1009,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:08:12.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-4138d8a9-7c1b-4c35-b3a7-a2a76a2bf4ad STEP: Creating a pod to test consume configMaps May 6 20:08:12.788: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f930bfa-f994-44ec-8251-0ac9b0af44ad" in namespace "configmap-7679" to be "Succeeded or Failed" May 6 20:08:12.826: INFO: Pod "pod-configmaps-8f930bfa-f994-44ec-8251-0ac9b0af44ad": Phase="Pending", Reason="", readiness=false. Elapsed: 38.181749ms May 6 20:08:14.886: INFO: Pod "pod-configmaps-8f930bfa-f994-44ec-8251-0ac9b0af44ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097739063s May 6 20:08:16.889: INFO: Pod "pod-configmaps-8f930bfa-f994-44ec-8251-0ac9b0af44ad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.10151313s STEP: Saw pod success May 6 20:08:16.890: INFO: Pod "pod-configmaps-8f930bfa-f994-44ec-8251-0ac9b0af44ad" satisfied condition "Succeeded or Failed" May 6 20:08:16.892: INFO: Trying to get logs from node latest-worker pod pod-configmaps-8f930bfa-f994-44ec-8251-0ac9b0af44ad container configmap-volume-test: STEP: delete the pod May 6 20:08:17.113: INFO: Waiting for pod pod-configmaps-8f930bfa-f994-44ec-8251-0ac9b0af44ad to disappear May 6 20:08:17.155: INFO: Pod pod-configmaps-8f930bfa-f994-44ec-8251-0ac9b0af44ad no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:08:17.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7679" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":60,"skipped":1009,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:08:17.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted May 6 20:08:30.404: INFO: 5 pods remaining May 6 20:08:30.404: INFO: 5 pods has nil DeletionTimestamp May 6 20:08:30.404: INFO: STEP: Gathering metrics W0506 20:08:34.979790 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
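The pods still remaining at this point are the dependents that were given a second owner: each carries ownerReferences to both replication controllers, so deleting simpletest-rc-to-be-deleted while it waits for dependents must leave them alone as long as simpletest-rc-to-stay is alive. A sketch of how such a doubly-owned pod's metadata could be set up by hand; the pod name, namespace, and UIDs are illustrative (the UIDs would have to match the live RC objects for the garbage collector to honor them):

    # Sketch: a dependent pod owned by two RCs. Foreground deletion of one
    # owner must not remove the pod while the other ownerReference is valid.
    kubectl patch pod simpletest-pod -n gc-demo --type merge -p '{
      "metadata": {"ownerReferences": [
        {"apiVersion": "v1", "kind": "ReplicationController",
         "name": "simpletest-rc-to-be-deleted",
         "uid": "11111111-1111-1111-1111-111111111111"},
        {"apiVersion": "v1", "kind": "ReplicationController",
         "name": "simpletest-rc-to-stay",
         "uid": "22222222-2222-2222-2222-222222222222"}
      ]}
    }'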
May 6 20:08:34.979: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:08:34.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3506" for this suite. • [SLOW TEST:17.776 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":61,"skipped":1025,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:08:34.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 6 20:08:35.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 6 20:08:35.129: INFO: stderr: "" May 6 20:08:35.129: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:08:35.129: INFO: Waiting up
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6423" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":62,"skipped":1043,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:08:35.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 20:08:35.286: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8d2a156e-7d9f-4fbf-9837-c59e8db30b0f" in namespace "projected-9589" to be "Succeeded or Failed" May 6 20:08:35.307: INFO: Pod "downwardapi-volume-8d2a156e-7d9f-4fbf-9837-c59e8db30b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.798331ms May 6 20:08:37.393: INFO: Pod "downwardapi-volume-8d2a156e-7d9f-4fbf-9837-c59e8db30b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10738791s May 6 20:08:39.438: INFO: Pod "downwardapi-volume-8d2a156e-7d9f-4fbf-9837-c59e8db30b0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151435189s STEP: Saw pod success May 6 20:08:39.438: INFO: Pod "downwardapi-volume-8d2a156e-7d9f-4fbf-9837-c59e8db30b0f" satisfied condition "Succeeded or Failed" May 6 20:08:39.441: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8d2a156e-7d9f-4fbf-9837-c59e8db30b0f container client-container: STEP: delete the pod May 6 20:08:39.524: INFO: Waiting for pod downwardapi-volume-8d2a156e-7d9f-4fbf-9837-c59e8db30b0f to disappear May 6 20:08:39.628: INFO: Pod downwardapi-volume-8d2a156e-7d9f-4fbf-9837-c59e8db30b0f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:08:39.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9589" for this suite. 
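The projected downward API volume exercised above renders the container's memory request into a file inside the pod. A minimal sketch of the same mapping, assuming busybox as the image and illustrative names throughout (the e2e framework generates its own):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mem-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi
EOF
kubectl logs downwardapi-mem-demo   # should print 32 once the pod has succeeded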
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":63,"skipped":1058,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:08:39.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-5550 STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 20:08:39.763: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 6 20:08:39.910: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 20:08:42.229: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 20:08:43.961: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 20:08:46.115: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:08:47.977: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:08:49.914: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:08:51.915: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:08:53.914: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:08:55.915: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:08:57.915: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:08:59.915: INFO: The status of Pod netserver-0 is Running (Ready = true) May 6 20:08:59.919: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 6 20:09:03.947: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.160:8080/dial?request=hostname&protocol=udp&host=10.244.1.70&port=8081&tries=1'] Namespace:pod-network-test-5550 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:09:03.947: INFO: >>> kubeConfig: /root/.kube/config I0506 20:09:03.984067 7 log.go:172] (0xc002d40000) (0xc0023c66e0) Create stream I0506 20:09:03.984097 7 log.go:172] (0xc002d40000) (0xc0023c66e0) Stream added, broadcasting: 1 I0506 20:09:03.986116 7 log.go:172] (0xc002d40000) Reply frame received for 1 I0506 20:09:03.986156 7 log.go:172] (0xc002d40000) (0xc0023c68c0) Create stream I0506 20:09:03.986171 7 log.go:172] (0xc002d40000) (0xc0023c68c0) Stream added, broadcasting: 3 I0506 20:09:03.987188 7 log.go:172] (0xc002d40000) Reply frame received for 3 I0506 20:09:03.987226 7 log.go:172] (0xc002d40000) (0xc002a10320) Create stream I0506 20:09:03.987240 7 log.go:172] (0xc002d40000) (0xc002a10320) Stream added, broadcasting: 5 I0506 20:09:03.988155 7 log.go:172] (0xc002d40000) Reply frame received for 5 
I0506 20:09:04.057027 7 log.go:172] (0xc002d40000) Data frame received for 3 I0506 20:09:04.057066 7 log.go:172] (0xc0023c68c0) (3) Data frame handling I0506 20:09:04.057107 7 log.go:172] (0xc0023c68c0) (3) Data frame sent I0506 20:09:04.058080 7 log.go:172] (0xc002d40000) Data frame received for 3 I0506 20:09:04.058093 7 log.go:172] (0xc0023c68c0) (3) Data frame handling I0506 20:09:04.058242 7 log.go:172] (0xc002d40000) Data frame received for 5 I0506 20:09:04.058253 7 log.go:172] (0xc002a10320) (5) Data frame handling I0506 20:09:04.059877 7 log.go:172] (0xc002d40000) Data frame received for 1 I0506 20:09:04.059892 7 log.go:172] (0xc0023c66e0) (1) Data frame handling I0506 20:09:04.059907 7 log.go:172] (0xc0023c66e0) (1) Data frame sent I0506 20:09:04.059938 7 log.go:172] (0xc002d40000) (0xc0023c66e0) Stream removed, broadcasting: 1 I0506 20:09:04.060073 7 log.go:172] (0xc002d40000) Go away received I0506 20:09:04.060171 7 log.go:172] (0xc002d40000) (0xc0023c66e0) Stream removed, broadcasting: 1 I0506 20:09:04.060182 7 log.go:172] (0xc002d40000) (0xc0023c68c0) Stream removed, broadcasting: 3 I0506 20:09:04.060188 7 log.go:172] (0xc002d40000) (0xc002a10320) Stream removed, broadcasting: 5 May 6 20:09:04.060: INFO: Waiting for responses: map[] May 6 20:09:04.063: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.160:8080/dial?request=hostname&protocol=udp&host=10.244.2.159&port=8081&tries=1'] Namespace:pod-network-test-5550 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:09:04.063: INFO: >>> kubeConfig: /root/.kube/config I0506 20:09:04.153632 7 log.go:172] (0xc002f720b0) (0xc001426820) Create stream I0506 20:09:04.153660 7 log.go:172] (0xc002f720b0) (0xc001426820) Stream added, broadcasting: 1 I0506 20:09:04.155109 7 log.go:172] (0xc002f720b0) Reply frame received for 1 I0506 20:09:04.155144 7 log.go:172] (0xc002f720b0) (0xc0023c6aa0) Create stream I0506 20:09:04.155154 7 log.go:172] (0xc002f720b0) (0xc0023c6aa0) Stream added, broadcasting: 3 I0506 20:09:04.155866 7 log.go:172] (0xc002f720b0) Reply frame received for 3 I0506 20:09:04.155896 7 log.go:172] (0xc002f720b0) (0xc00179ac80) Create stream I0506 20:09:04.155915 7 log.go:172] (0xc002f720b0) (0xc00179ac80) Stream added, broadcasting: 5 I0506 20:09:04.156636 7 log.go:172] (0xc002f720b0) Reply frame received for 5 I0506 20:09:04.225928 7 log.go:172] (0xc002f720b0) Data frame received for 3 I0506 20:09:04.225972 7 log.go:172] (0xc0023c6aa0) (3) Data frame handling I0506 20:09:04.225989 7 log.go:172] (0xc0023c6aa0) (3) Data frame sent I0506 20:09:04.226652 7 log.go:172] (0xc002f720b0) Data frame received for 3 I0506 20:09:04.226693 7 log.go:172] (0xc0023c6aa0) (3) Data frame handling I0506 20:09:04.226720 7 log.go:172] (0xc002f720b0) Data frame received for 5 I0506 20:09:04.226732 7 log.go:172] (0xc00179ac80) (5) Data frame handling I0506 20:09:04.228377 7 log.go:172] (0xc002f720b0) Data frame received for 1 I0506 20:09:04.228400 7 log.go:172] (0xc001426820) (1) Data frame handling I0506 20:09:04.228415 7 log.go:172] (0xc001426820) (1) Data frame sent I0506 20:09:04.228433 7 log.go:172] (0xc002f720b0) (0xc001426820) Stream removed, broadcasting: 1 I0506 20:09:04.228516 7 log.go:172] (0xc002f720b0) (0xc001426820) Stream removed, broadcasting: 1 I0506 20:09:04.228539 7 log.go:172] (0xc002f720b0) (0xc0023c6aa0) Stream removed, broadcasting: 3 I0506 20:09:04.228567 7 log.go:172] (0xc002f720b0) (0xc00179ac80) Stream removed, 
broadcasting: 5 May 6 20:09:04.228: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 I0506 20:09:04.228670 7 log.go:172] (0xc002f720b0) Go away received May 6 20:09:04.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5550" for this suite. • [SLOW TEST:24.601 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":64,"skipped":1105,"failed":0} SSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:09:04.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 6 20:09:08.821: INFO: Successfully updated pod "adopt-release-jk5dk" STEP: Checking that the Job readopts the Pod May 6 20:09:08.821: INFO: Waiting up to 15m0s for pod "adopt-release-jk5dk" in namespace "job-4454" to be "adopted" May 6 20:09:08.836: INFO: Pod "adopt-release-jk5dk": Phase="Running", Reason="", readiness=true. Elapsed: 14.456609ms May 6 20:09:10.839: INFO: Pod "adopt-release-jk5dk": Phase="Running", Reason="", readiness=true. Elapsed: 2.017257732s May 6 20:09:10.839: INFO: Pod "adopt-release-jk5dk" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 6 20:09:11.438: INFO: Successfully updated pod "adopt-release-jk5dk" STEP: Checking that the Job releases the Pod May 6 20:09:11.438: INFO: Waiting up to 15m0s for pod "adopt-release-jk5dk" in namespace "job-4454" to be "released" May 6 20:09:11.647: INFO: Pod "adopt-release-jk5dk": Phase="Running", Reason="", readiness=true. Elapsed: 208.85419ms May 6 20:09:13.652: INFO: Pod "adopt-release-jk5dk": Phase="Running", Reason="", readiness=true. Elapsed: 2.21371444s May 6 20:09:13.652: INFO: Pod "adopt-release-jk5dk" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:09:13.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4454" for this suite. 
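The adopt/release cycle above can be replayed by hand against any pod a Job owns. A sketch, assuming kubectl v1.18 and the job-name/controller-uid labels the Job controller of that era stamps on its pods (the pod name is a placeholder):

# Orphan the pod: drop its ownerReferences; the Job controller should
# re-adopt it on a subsequent sync because the labels still match.
kubectl patch pod <job-pod> --type=json \
  -p '[{"op":"remove","path":"/metadata/ownerReferences"}]'
kubectl get pod <job-pod> -o jsonpath='{.metadata.ownerReferences[0].kind}'   # Job again

# Release the pod: strip the matching labels; the controller removes its
# controllerRef instead of deleting the pod.
kubectl label pod <job-pod> job-name- controller-uid-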
• [SLOW TEST:9.424 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":65,"skipped":1109,"failed":0} SSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:09:13.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 6 20:09:14.620: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6471" to be "Succeeded or Failed" May 6 20:09:14.804: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 184.617612ms May 6 20:09:16.809: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188950246s May 6 20:09:18.815: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195245537s May 6 20:09:20.868: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.248684893s May 6 20:09:22.898: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.278746686s STEP: Saw pod success May 6 20:09:22.898: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 6 20:09:22.901: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 6 20:09:22.927: INFO: Waiting for pod pod-host-path-test to disappear May 6 20:09:22.942: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:09:22.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6471" for this suite. 
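The hostPath spec above is checking the permission bits the kubelet leaves on the mount point. A hand-run equivalent with an illustrative host path; the container just stats the mounted directory:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-mode-demo
      type: DirectoryOrCreate
EOF
kubectl logs hostpath-mode-demo   # octal mode of the mount point; the exact value depends on the runtime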
• [SLOW TEST:9.335 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":66,"skipped":1112,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:09:22.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 20:09:23.744: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 20:09:25.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724392563, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724392563, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724392563, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724392563, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 20:09:27.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724392563, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724392563, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724392563, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724392563, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 20:09:30.869: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 6 20:09:30.920: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:09:30.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-766" for this suite. STEP: Destroying namespace "webhook-766-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.098 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":67,"skipped":1115,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:09:31.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-805/secret-test-bf5c4ea5-3d5e-4ce3-9df2-9caee6745847 STEP: Creating a pod to test consume secrets May 6 20:09:31.591: INFO: Waiting up to 5m0s for pod "pod-configmaps-c817a7a3-ec69-4b53-807f-0c27ab25e9f1" in namespace "secrets-805" to be "Succeeded or Failed" May 6 20:09:31.596: INFO: Pod "pod-configmaps-c817a7a3-ec69-4b53-807f-0c27ab25e9f1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.453784ms May 6 20:09:33.600: INFO: Pod "pod-configmaps-c817a7a3-ec69-4b53-807f-0c27ab25e9f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009109962s May 6 20:09:35.635: INFO: Pod "pod-configmaps-c817a7a3-ec69-4b53-807f-0c27ab25e9f1": Phase="Running", Reason="", readiness=true. Elapsed: 4.044172259s May 6 20:09:37.640: INFO: Pod "pod-configmaps-c817a7a3-ec69-4b53-807f-0c27ab25e9f1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.048956268s STEP: Saw pod success May 6 20:09:37.640: INFO: Pod "pod-configmaps-c817a7a3-ec69-4b53-807f-0c27ab25e9f1" satisfied condition "Succeeded or Failed" May 6 20:09:37.643: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c817a7a3-ec69-4b53-807f-0c27ab25e9f1 container env-test: STEP: delete the pod May 6 20:09:37.720: INFO: Waiting for pod pod-configmaps-c817a7a3-ec69-4b53-807f-0c27ab25e9f1 to disappear May 6 20:09:37.729: INFO: Pod pod-configmaps-c817a7a3-ec69-4b53-807f-0c27ab25e9f1 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:09:37.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-805" for this suite. • [SLOW TEST:6.641 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":68,"skipped":1119,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:09:37.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8310.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8310.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8310.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8310.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 20:09:46.179: INFO: DNS probes using dns-test-e8629817-f740-4e8c-976b-5766663e39cd succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8310.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8310.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8310.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8310.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 20:09:56.808: INFO: File 
wheezy_udp@dns-test-service-3.dns-8310.svc.cluster.local from pod dns-8310/dns-test-7acf2d9d-bd39-4db3-98f0-72a539bfcf71 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 20:09:56.811: INFO: File jessie_udp@dns-test-service-3.dns-8310.svc.cluster.local from pod dns-8310/dns-test-7acf2d9d-bd39-4db3-98f0-72a539bfcf71 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 20:09:56.811: INFO: Lookups using dns-8310/dns-test-7acf2d9d-bd39-4db3-98f0-72a539bfcf71 failed for: [wheezy_udp@dns-test-service-3.dns-8310.svc.cluster.local jessie_udp@dns-test-service-3.dns-8310.svc.cluster.local] May 6 20:10:01.816: INFO: File wheezy_udp@dns-test-service-3.dns-8310.svc.cluster.local from pod dns-8310/dns-test-7acf2d9d-bd39-4db3-98f0-72a539bfcf71 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 20:10:01.821: INFO: File jessie_udp@dns-test-service-3.dns-8310.svc.cluster.local from pod dns-8310/dns-test-7acf2d9d-bd39-4db3-98f0-72a539bfcf71 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 20:10:01.821: INFO: Lookups using dns-8310/dns-test-7acf2d9d-bd39-4db3-98f0-72a539bfcf71 failed for: [wheezy_udp@dns-test-service-3.dns-8310.svc.cluster.local jessie_udp@dns-test-service-3.dns-8310.svc.cluster.local] May 6 20:10:06.820: INFO: File wheezy_udp@dns-test-service-3.dns-8310.svc.cluster.local from pod dns-8310/dns-test-7acf2d9d-bd39-4db3-98f0-72a539bfcf71 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 20:10:06.824: INFO: File jessie_udp@dns-test-service-3.dns-8310.svc.cluster.local from pod dns-8310/dns-test-7acf2d9d-bd39-4db3-98f0-72a539bfcf71 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 20:10:06.824: INFO: Lookups using dns-8310/dns-test-7acf2d9d-bd39-4db3-98f0-72a539bfcf71 failed for: [wheezy_udp@dns-test-service-3.dns-8310.svc.cluster.local jessie_udp@dns-test-service-3.dns-8310.svc.cluster.local] May 6 20:10:11.817: INFO: File wheezy_udp@dns-test-service-3.dns-8310.svc.cluster.local from pod dns-8310/dns-test-7acf2d9d-bd39-4db3-98f0-72a539bfcf71 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 20:10:11.821: INFO: File jessie_udp@dns-test-service-3.dns-8310.svc.cluster.local from pod dns-8310/dns-test-7acf2d9d-bd39-4db3-98f0-72a539bfcf71 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 6 20:10:11.821: INFO: Lookups using dns-8310/dns-test-7acf2d9d-bd39-4db3-98f0-72a539bfcf71 failed for: [wheezy_udp@dns-test-service-3.dns-8310.svc.cluster.local jessie_udp@dns-test-service-3.dns-8310.svc.cluster.local] May 6 20:10:16.821: INFO: DNS probes using dns-test-7acf2d9d-bd39-4db3-98f0-72a539bfcf71 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8310.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8310.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8310.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8310.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 20:10:23.891: INFO: DNS probes using dns-test-4cdd0a6b-ea7a-439d-9ae5-6ec7b189580f succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:10:24.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8310" for this suite. • [SLOW TEST:46.274 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":69,"skipped":1142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:10:24.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:10:24.415: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"43b5c416-80b5-444d-b351-1da8e47fbaf9", Controller:(*bool)(0xc003b73aa2), BlockOwnerDeletion:(*bool)(0xc003b73aa3)}} May 6 20:10:24.427: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e8923d92-5dce-4263-a5fe-fd62b231c9e4", Controller:(*bool)(0xc0029d30fa), BlockOwnerDeletion:(*bool)(0xc0029d30fb)}} May 6 20:10:24.445: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"aaa5dc36-f4b8-4a2a-ad7e-0f0e421c421e", Controller:(*bool)(0xc003fc5a32), BlockOwnerDeletion:(*bool)(0xc003fc5a33)}} [AfterEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:10:29.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7795" for this suite. • [SLOW TEST:5.601 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":70,"skipped":1195,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:10:29.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 6 20:10:29.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5383' May 6 20:10:30.112: INFO: stderr: "" May 6 20:10:30.112: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 6 20:10:31.138: INFO: Selector matched 1 pods for map[app:agnhost] May 6 20:10:31.138: INFO: Found 0 / 1 May 6 20:10:32.163: INFO: Selector matched 1 pods for map[app:agnhost] May 6 20:10:32.163: INFO: Found 0 / 1 May 6 20:10:33.116: INFO: Selector matched 1 pods for map[app:agnhost] May 6 20:10:33.116: INFO: Found 0 / 1 May 6 20:10:34.117: INFO: Selector matched 1 pods for map[app:agnhost] May 6 20:10:34.117: INFO: Found 1 / 1 May 6 20:10:34.117: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 6 20:10:34.121: INFO: Selector matched 1 pods for map[app:agnhost] May 6 20:10:34.121: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 6 20:10:34.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-xxxjh --namespace=kubectl-5383 -p {"metadata":{"annotations":{"x":"y"}}}' May 6 20:10:34.230: INFO: stderr: "" May 6 20:10:34.230: INFO: stdout: "pod/agnhost-master-xxxjh patched\n" STEP: checking annotations May 6 20:10:34.247: INFO: Selector matched 1 pods for map[app:agnhost] May 6 20:10:34.247: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:10:34.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5383" for this suite. 
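The patch issued above is a strategic-merge patch, so the x=y annotation is merged into whatever annotations the pod already carries rather than replacing them. The same operation plus a quick verification, with a placeholder pod name:

kubectl patch pod <pod-name> -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod <pod-name> -o jsonpath='{.metadata.annotations.x}'   # prints: y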
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":71,"skipped":1205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:10:34.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 6 20:10:34.335: INFO: Waiting up to 5m0s for pod "pod-aa31b684-f684-4546-9a74-242536943c55" in namespace "emptydir-181" to be "Succeeded or Failed" May 6 20:10:34.382: INFO: Pod "pod-aa31b684-f684-4546-9a74-242536943c55": Phase="Pending", Reason="", readiness=false. Elapsed: 46.509671ms May 6 20:10:36.386: INFO: Pod "pod-aa31b684-f684-4546-9a74-242536943c55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051143923s May 6 20:10:38.390: INFO: Pod "pod-aa31b684-f684-4546-9a74-242536943c55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055241358s STEP: Saw pod success May 6 20:10:38.390: INFO: Pod "pod-aa31b684-f684-4546-9a74-242536943c55" satisfied condition "Succeeded or Failed" May 6 20:10:38.393: INFO: Trying to get logs from node latest-worker pod pod-aa31b684-f684-4546-9a74-242536943c55 container test-container: STEP: delete the pod May 6 20:10:38.423: INFO: Waiting for pod pod-aa31b684-f684-4546-9a74-242536943c55 to disappear May 6 20:10:38.438: INFO: Pod pod-aa31b684-f684-4546-9a74-242536943c55 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:10:38.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-181" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":72,"skipped":1252,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:10:38.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:10:42.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-560" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":73,"skipped":1257,"failed":0} ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:10:42.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:10:49.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-247" for this suite. STEP: Destroying namespace "nsdeletetest-3293" for this suite. May 6 20:10:49.198: INFO: Namespace nsdeletetest-3293 was already deleted STEP: Destroying namespace "nsdeletetest-6348" for this suite. 
• [SLOW TEST:6.434 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":74,"skipped":1257,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:10:49.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 6 20:10:53.870: INFO: Successfully updated pod "labelsupdatede6f9e18-ac4d-44ff-b192-b507fa594ae5" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:10:55.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1000" for this suite. 
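The "Successfully updated pod" above refers to a label change: the kubelet rewrites the projected labels file in place, without restarting the container. A sketch, names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    k1: v1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
kubectl label pod labelsupdate-demo k2=v2
kubectl logs labelsupdate-demo --tail=2   # shows k2="v2" after the kubelet's next sync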
• [SLOW TEST:6.728 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":75,"skipped":1258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:10:55.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 20:10:55.991: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08b254cd-9f19-463b-82bb-d91ec6799748" in namespace "projected-2397" to be "Succeeded or Failed" May 6 20:10:56.009: INFO: Pod "downwardapi-volume-08b254cd-9f19-463b-82bb-d91ec6799748": Phase="Pending", Reason="", readiness=false. Elapsed: 18.003763ms May 6 20:10:58.030: INFO: Pod "downwardapi-volume-08b254cd-9f19-463b-82bb-d91ec6799748": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038938872s May 6 20:11:00.036: INFO: Pod "downwardapi-volume-08b254cd-9f19-463b-82bb-d91ec6799748": Phase="Running", Reason="", readiness=true. Elapsed: 4.044400105s May 6 20:11:02.040: INFO: Pod "downwardapi-volume-08b254cd-9f19-463b-82bb-d91ec6799748": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048905056s STEP: Saw pod success May 6 20:11:02.040: INFO: Pod "downwardapi-volume-08b254cd-9f19-463b-82bb-d91ec6799748" satisfied condition "Succeeded or Failed" May 6 20:11:02.044: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-08b254cd-9f19-463b-82bb-d91ec6799748 container client-container: STEP: delete the pod May 6 20:11:02.104: INFO: Waiting for pod downwardapi-volume-08b254cd-9f19-463b-82bb-d91ec6799748 to disappear May 6 20:11:02.110: INFO: Pod downwardapi-volume-08b254cd-9f19-463b-82bb-d91ec6799748 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:11:02.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2397" for this suite. 
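The per-item mode knob is what this spec sets. A sketch using 0400, with the expected listing in the comment (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: item-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400        # expect -r-------- in the ls output
            fieldRef:
              fieldPath: metadata.name
EOF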
• [SLOW TEST:6.209 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":76,"skipped":1298,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:11:02.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-a43792d2-91b8-443c-a75d-96841bcf45c7 STEP: Creating a pod to test consume secrets May 6 20:11:02.403: INFO: Waiting up to 5m0s for pod "pod-secrets-98c620fe-e88f-4403-ad68-3d14777c088e" in namespace "secrets-1121" to be "Succeeded or Failed" May 6 20:11:02.408: INFO: Pod "pod-secrets-98c620fe-e88f-4403-ad68-3d14777c088e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.806886ms May 6 20:11:04.481: INFO: Pod "pod-secrets-98c620fe-e88f-4403-ad68-3d14777c088e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077020861s May 6 20:11:06.602: INFO: Pod "pod-secrets-98c620fe-e88f-4403-ad68-3d14777c088e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.198650212s STEP: Saw pod success May 6 20:11:06.602: INFO: Pod "pod-secrets-98c620fe-e88f-4403-ad68-3d14777c088e" satisfied condition "Succeeded or Failed" May 6 20:11:06.605: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-98c620fe-e88f-4403-ad68-3d14777c088e container secret-volume-test: STEP: delete the pod May 6 20:11:06.639: INFO: Waiting for pod pod-secrets-98c620fe-e88f-4403-ad68-3d14777c088e to disappear May 6 20:11:06.761: INFO: Pod pod-secrets-98c620fe-e88f-4403-ad68-3d14777c088e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:11:06.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1121" for this suite. 
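"With mappings" means an explicit items list: the secret key is projected under a remapped path, and keys not listed are skipped. By hand, reusing the key names the test uses (pod and secret names illustrative):

kubectl create secret generic secret-map-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-map-demo
      items:
      - key: data-1
        path: new-path-data-1
EOF
kubectl logs secret-map-demo    # value-1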
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":77,"skipped":1309,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:11:06.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0506 20:11:07.983906 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 6 20:11:07.983: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:11:07.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1201" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":78,"skipped":1322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:11:07.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:11:08.203: INFO: The status of Pod test-webserver-0d21aa09-d911-415e-b314-9304c3fb8f50 is Pending, waiting for it to be Running (with Ready = true) May 6 20:11:10.207: INFO: The status of Pod test-webserver-0d21aa09-d911-415e-b314-9304c3fb8f50 is Pending, waiting for it to be Running (with Ready = true) May 6 20:11:12.208: INFO: The status of Pod test-webserver-0d21aa09-d911-415e-b314-9304c3fb8f50 is Running (Ready = false) May 6 20:11:14.208: INFO: The status of Pod test-webserver-0d21aa09-d911-415e-b314-9304c3fb8f50 is Running (Ready = false) May 6 20:11:16.208: INFO: The status of Pod test-webserver-0d21aa09-d911-415e-b314-9304c3fb8f50 is Running (Ready = false) May 6 20:11:18.208: INFO: The status of Pod test-webserver-0d21aa09-d911-415e-b314-9304c3fb8f50 is Running (Ready = false) May 6 20:11:20.207: INFO: The status of Pod test-webserver-0d21aa09-d911-415e-b314-9304c3fb8f50 is Running (Ready = false) May 6 20:11:22.208: INFO: The status of Pod test-webserver-0d21aa09-d911-415e-b314-9304c3fb8f50 is Running (Ready = false) May 6 20:11:24.208: INFO: The status of Pod test-webserver-0d21aa09-d911-415e-b314-9304c3fb8f50 is Running (Ready = false) May 6 20:11:26.207: INFO: The status of Pod test-webserver-0d21aa09-d911-415e-b314-9304c3fb8f50 is Running (Ready = false) May 6 20:11:28.208: INFO: The status of Pod test-webserver-0d21aa09-d911-415e-b314-9304c3fb8f50 is Running (Ready = false) May 6 20:11:30.208: INFO: The status of Pod test-webserver-0d21aa09-d911-415e-b314-9304c3fb8f50 is Running (Ready = true) May 6 20:11:30.211: INFO: Container started at 2020-05-06 20:11:11 +0000 UTC, pod became ready at 2020-05-06 20:11:28 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:11:30.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9503" for this suite. 
• [SLOW TEST:22.230 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":79,"skipped":1358,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:11:30.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-933.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-933.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-933.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-933.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-933.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-933.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-933.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 146.1.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.1.146_udp@PTR;check="$$(dig +tcp +noall +answer +search 146.1.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.1.146_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-933.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-933.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-933.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-933.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-933.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-933.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-933.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 146.1.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.1.146_udp@PTR;check="$$(dig +tcp +noall +answer +search 146.1.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.1.146_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 20:11:36.581: INFO: Unable to read wheezy_udp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:36.584: INFO: Unable to read wheezy_tcp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:36.588: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:36.591: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:36.615: INFO: Unable to read jessie_udp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:36.617: INFO: Unable to read jessie_tcp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:36.619: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:36.622: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:36.636: INFO: Lookups using dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085 failed for: [wheezy_udp@dns-test-service.dns-933.svc.cluster.local wheezy_tcp@dns-test-service.dns-933.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local jessie_udp@dns-test-service.dns-933.svc.cluster.local jessie_tcp@dns-test-service.dns-933.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local] May 6 20:11:41.641: INFO: Unable to read wheezy_udp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:41.645: INFO: Unable to read wheezy_tcp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:41.649: INFO: 
Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:41.652: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:41.678: INFO: Unable to read jessie_udp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:41.681: INFO: Unable to read jessie_tcp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:41.684: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:41.691: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:41.707: INFO: Lookups using dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085 failed for: [wheezy_udp@dns-test-service.dns-933.svc.cluster.local wheezy_tcp@dns-test-service.dns-933.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local jessie_udp@dns-test-service.dns-933.svc.cluster.local jessie_tcp@dns-test-service.dns-933.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local] May 6 20:11:46.641: INFO: Unable to read wheezy_udp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:46.645: INFO: Unable to read wheezy_tcp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:46.649: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:46.653: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:46.675: INFO: Unable to read jessie_udp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 
20:11:46.677: INFO: Unable to read jessie_tcp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:46.680: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:46.682: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:46.700: INFO: Lookups using dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085 failed for: [wheezy_udp@dns-test-service.dns-933.svc.cluster.local wheezy_tcp@dns-test-service.dns-933.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local jessie_udp@dns-test-service.dns-933.svc.cluster.local jessie_tcp@dns-test-service.dns-933.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local] May 6 20:11:51.641: INFO: Unable to read wheezy_udp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:51.644: INFO: Unable to read wheezy_tcp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:51.648: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:51.650: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:51.671: INFO: Unable to read jessie_udp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:51.674: INFO: Unable to read jessie_tcp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:51.676: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:51.679: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods 
dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:51.696: INFO: Lookups using dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085 failed for: [wheezy_udp@dns-test-service.dns-933.svc.cluster.local wheezy_tcp@dns-test-service.dns-933.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local jessie_udp@dns-test-service.dns-933.svc.cluster.local jessie_tcp@dns-test-service.dns-933.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local] May 6 20:11:56.641: INFO: Unable to read wheezy_udp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:56.645: INFO: Unable to read wheezy_tcp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:56.648: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:56.650: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:56.672: INFO: Unable to read jessie_udp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:56.675: INFO: Unable to read jessie_tcp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:56.677: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:56.680: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:11:56.695: INFO: Lookups using dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085 failed for: [wheezy_udp@dns-test-service.dns-933.svc.cluster.local wheezy_tcp@dns-test-service.dns-933.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local jessie_udp@dns-test-service.dns-933.svc.cluster.local jessie_tcp@dns-test-service.dns-933.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local] May 6 20:12:01.641: INFO: Unable to read wheezy_udp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could 
not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:12:01.645: INFO: Unable to read wheezy_tcp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:12:01.650: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:12:01.655: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:12:01.673: INFO: Unable to read jessie_udp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:12:01.676: INFO: Unable to read jessie_tcp@dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:12:01.679: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:12:01.681: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local from pod dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085: the server could not find the requested resource (get pods dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085) May 6 20:12:01.695: INFO: Lookups using dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085 failed for: [wheezy_udp@dns-test-service.dns-933.svc.cluster.local wheezy_tcp@dns-test-service.dns-933.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local jessie_udp@dns-test-service.dns-933.svc.cluster.local jessie_tcp@dns-test-service.dns-933.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-933.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-933.svc.cluster.local] May 6 20:12:06.700: INFO: DNS probes using dns-933/dns-test-88a2bebb-1f8a-4368-a8f4-597d35ff6085 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:12:07.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-933" for this suite. 
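The dig loops above are the heavy-duty version of a check you can run once by hand. A minimal sketch (the dnsutils image is an assumption, not the e2e fixture): resolve a service's A record and the SRV record of one of its named ports from inside the cluster.

    kubectl run -it --rm dns-check --image=tutum/dnsutils --restart=Never -- \
      dig +search +short dns-test-service.dns-933.svc.cluster.local A
    kubectl run -it --rm dns-check-srv --image=tutum/dnsutils --restart=Never -- \
      dig +search +short _http._tcp.dns-test-service.dns-933.svc.cluster.local SRV

The repeated "Unable to read ... could not find the requested resource" lines are the framework failing to fetch result files that the probe pod has not written yet; once the DNS records are served, the dig loops write their OK markers and the probes succeed at 20:12:06.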
• [SLOW TEST:37.240 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":80,"skipped":1370,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:12:07.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 6 20:12:15.923: INFO: 2 pods remaining May 6 20:12:15.923: INFO: 0 pods have nil DeletionTimestamp May 6 20:12:15.923: INFO: May 6 20:12:17.126: INFO: 0 pods remaining May 6 20:12:17.126: INFO: 0 pods have nil DeletionTimestamp May 6 20:12:17.126: INFO: May 6 20:12:17.880: INFO: 0 pods remaining May 6 20:12:17.880: INFO: 0 pods have nil DeletionTimestamp May 6 20:12:17.880: INFO: STEP: Gathering metrics W0506 20:12:18.763076 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 6 20:12:18.763: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:12:18.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3799" for this suite. 
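The deleteOptions behavior under test is foreground cascading deletion: the owner object is marked with a deletionTimestamp and a foregroundDeletion finalizer, and is only removed after all of its dependents are gone. Against the raw API it looks like this (namespace and RC name hypothetical):

    kubectl proxy --port=8080 &
    curl -X DELETE localhost:8080/api/v1/namespaces/default/replicationcontrollers/my-rc \
      -H "Content-Type: application/json" \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'

That is why the log counts pods remaining after the delete call: the ReplicationController object survives until the count reaches zero.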
• [SLOW TEST:11.400 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":81,"skipped":1385,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:12:18.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating a pod May 6 20:12:19.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-5620 -- logs-generator --log-lines-total 100 --run-duration 20s' May 6 20:12:20.122: INFO: stderr: "" May 6 20:12:20.122: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 6 20:12:20.122: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 6 20:12:20.122: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5620" to be "running and ready, or succeeded" May 6 20:12:20.335: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 212.815102ms May 6 20:12:22.340: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217972479s May 6 20:12:24.346: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.224077566s May 6 20:12:24.346: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 6 20:12:24.346: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings May 6 20:12:24.346: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5620' May 6 20:12:24.457: INFO: stderr: "" May 6 20:12:24.457: INFO: stdout: "I0506 20:12:23.088478 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/q2q 371\nI0506 20:12:23.288696 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/htn 519\nI0506 20:12:23.488720 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/q8h 310\nI0506 20:12:23.688620 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/znt 519\nI0506 20:12:23.888683 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/sxv 526\nI0506 20:12:24.088655 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/fdb 359\nI0506 20:12:24.288651 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/xth 593\n" STEP: limiting log lines May 6 20:12:24.457: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5620 --tail=1' May 6 20:12:24.620: INFO: stderr: "" May 6 20:12:24.620: INFO: stdout: "I0506 20:12:24.488636 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/sl8 400\n" May 6 20:12:24.620: INFO: got output "I0506 20:12:24.488636 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/sl8 400\n" STEP: limiting log bytes May 6 20:12:24.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5620 --limit-bytes=1' May 6 20:12:24.721: INFO: stderr: "" May 6 20:12:24.721: INFO: stdout: "I" May 6 20:12:24.721: INFO: got output "I" STEP: exposing timestamps May 6 20:12:24.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5620 --tail=1 --timestamps' May 6 20:12:24.847: INFO: stderr: "" May 6 20:12:24.847: INFO: stdout: "2020-05-06T20:12:24.6887682Z I0506 20:12:24.688603 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/vx9 365\n" May 6 20:12:24.847: INFO: got output "2020-05-06T20:12:24.6887682Z I0506 20:12:24.688603 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/vx9 365\n" STEP: restricting to a time range May 6 20:12:27.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5620 --since=1s' May 6 20:12:27.463: INFO: stderr: "" May 6 20:12:27.463: INFO: stdout: "I0506 20:12:26.488627 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/4gbp 361\nI0506 20:12:26.688666 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/qxn 555\nI0506 20:12:26.888661 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/mt6 232\nI0506 20:12:27.088655 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/54sp 277\nI0506 20:12:27.288672 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/9rdr 536\n" May 6 20:12:27.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5620 --since=24h' May 6 20:12:27.566: INFO: stderr: "" May 6 20:12:27.566: INFO: stdout: "I0506 20:12:23.088478 1 logs_generator.go:76] 0 POST 
/api/v1/namespaces/kube-system/pods/q2q 371\nI0506 20:12:23.288696 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/htn 519\nI0506 20:12:23.488720 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/q8h 310\nI0506 20:12:23.688620 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/znt 519\nI0506 20:12:23.888683 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/sxv 526\nI0506 20:12:24.088655 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/fdb 359\nI0506 20:12:24.288651 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/xth 593\nI0506 20:12:24.488636 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/sl8 400\nI0506 20:12:24.688603 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/vx9 365\nI0506 20:12:24.888604 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/zk9 554\nI0506 20:12:25.088666 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/mxxs 382\nI0506 20:12:25.288673 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/djtk 427\nI0506 20:12:25.488678 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/mj8p 542\nI0506 20:12:25.688645 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/ndc 534\nI0506 20:12:25.888653 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/wmww 514\nI0506 20:12:26.088669 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/g47 455\nI0506 20:12:26.288625 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/jlj 466\nI0506 20:12:26.488627 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/4gbp 361\nI0506 20:12:26.688666 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/qxn 555\nI0506 20:12:26.888661 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/mt6 232\nI0506 20:12:27.088655 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/54sp 277\nI0506 20:12:27.288672 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/9rdr 536\nI0506 20:12:27.488635 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/nvm 254\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 6 20:12:27.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5620' May 6 20:12:35.596: INFO: stderr: "" May 6 20:12:35.596: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:12:35.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5620" for this suite. 
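Every filter exercised above maps to a kubectl logs flag, runnable against any pod:

    kubectl logs logs-generator --namespace=kubectl-5620                 # full container log
    kubectl logs logs-generator --namespace=kubectl-5620 --tail=1        # only the last line
    kubectl logs logs-generator --namespace=kubectl-5620 --limit-bytes=1 # only the first byte
    kubectl logs logs-generator --namespace=kubectl-5620 --tail=1 --timestamps  # prefix RFC3339 timestamps
    kubectl logs logs-generator --namespace=kubectl-5620 --since=1s      # only entries from the last second

(The pod and the container are both named logs-generator, which is why the logged commands pass the name twice: first the pod, then the container within it.)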
• [SLOW TEST:17.050 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":82,"skipped":1402,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:12:35.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-3419787d-9699-465b-b011-03ac1ac1476e STEP: Creating secret with name s-test-opt-upd-f30360c4-ab2a-4401-9e42-13ef5b9f8d5c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-3419787d-9699-465b-b011-03ac1ac1476e STEP: Updating secret s-test-opt-upd-f30360c4-ab2a-4401-9e42-13ef5b9f8d5c STEP: Creating secret with name s-test-opt-create-0837bc5a-42d8-4a99-8495-7c464052aac6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:14:07.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1648" for this suite. 
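The optional-secret behavior the test waits on can be sketched with a projected volume; the pod name, image, and secret name here are illustrative. With optional: true, the pod starts even if a referenced secret does not exist, and the kubelet adds or removes the mounted files as secrets are created, updated, or deleted.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      containers:
      - name: c
        image: busybox:1.31
        command: ["sh", "-c", "while true; do ls /etc/creds; sleep 5; done"]
        volumeMounts:
        - name: creds
          mountPath: /etc/creds
      volumes:
      - name: creds
        projected:
          sources:
          - secret:
              name: maybe-missing-secret
              optional: true    # a missing secret does not block pod startup
    EOF

The 91-second runtime above is dominated by waiting for the kubelet's periodic sync to reflect the create/update/delete in the mounted volume, not by the API operations themselves.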
• [SLOW TEST:91.146 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":83,"skipped":1409,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:14:07.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 6 20:14:07.189: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:14:07.205: INFO: Number of nodes with available pods: 0 May 6 20:14:07.205: INFO: Node latest-worker is running more than one daemon pod May 6 20:14:08.255: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:14:08.259: INFO: Number of nodes with available pods: 0 May 6 20:14:08.259: INFO: Node latest-worker is running more than one daemon pod May 6 20:14:09.366: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:14:09.369: INFO: Number of nodes with available pods: 0 May 6 20:14:09.369: INFO: Node latest-worker is running more than one daemon pod May 6 20:14:10.230: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:14:10.234: INFO: Number of nodes with available pods: 0 May 6 20:14:10.234: INFO: Node latest-worker is running more than one daemon pod May 6 20:14:11.218: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:14:11.230: INFO: Number of nodes with available pods: 1 May 6 20:14:11.230: INFO: Node latest-worker2 is running more than one daemon pod May 6 20:14:12.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:14:12.248: INFO: Number of nodes with available pods: 2 May 6 20:14:12.248: INFO: 
Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 6 20:14:12.314: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:14:12.378: INFO: Number of nodes with available pods: 1 May 6 20:14:12.378: INFO: Node latest-worker is running more than one daemon pod May 6 20:14:13.393: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:14:13.397: INFO: Number of nodes with available pods: 1 May 6 20:14:13.397: INFO: Node latest-worker is running more than one daemon pod May 6 20:14:14.387: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:14:14.390: INFO: Number of nodes with available pods: 1 May 6 20:14:14.390: INFO: Node latest-worker is running more than one daemon pod May 6 20:14:15.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:14:15.434: INFO: Number of nodes with available pods: 1 May 6 20:14:15.434: INFO: Node latest-worker is running more than one daemon pod May 6 20:14:16.384: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:14:16.387: INFO: Number of nodes with available pods: 1 May 6 20:14:16.387: INFO: Node latest-worker is running more than one daemon pod May 6 20:14:17.383: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:14:17.386: INFO: Number of nodes with available pods: 2 May 6 20:14:17.386: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6761, will wait for the garbage collector to delete the pods May 6 20:14:17.452: INFO: Deleting DaemonSet.extensions daemon-set took: 8.090533ms May 6 20:14:17.852: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.21765ms May 6 20:14:25.685: INFO: Number of nodes with available pods: 0 May 6 20:14:25.685: INFO: Number of running nodes: 0, number of available pods: 0 May 6 20:14:25.687: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6761/daemonsets","resourceVersion":"2087357"},"items":null} May 6 20:14:25.703: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6761/pods","resourceVersion":"2087359"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:14:25.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6761" for this suite. 
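Two details of this test are worth unpacking. The control-plane node is skipped because the e2e DaemonSet carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint; a sketch of a DaemonSet that would also land there (name and image illustrative):

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-demo
    spec:
      selector:
        matchLabels:
          app: daemon-demo
      template:
        metadata:
          labels:
            app: daemon-demo
        spec:
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule   # without this, tainted control-plane nodes are skipped
          containers:
          - name: pause
            image: k8s.gcr.io/pause:3.2
    EOF

The "revived" step works because the DaemonSet controller treats a pod in phase Failed as gone and creates a replacement, which is what the availability count dropping to 1 and recovering to 2 shows.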
• [SLOW TEST:18.672 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":84,"skipped":1428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:14:25.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 20:14:26.226: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 20:14:28.334: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724392866, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724392866, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724392866, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724392866, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 20:14:31.436: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:14:33.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1907" for this suite. 
STEP: Destroying namespace "webhook-1907-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.526 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":85,"skipped":1462,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:14:33.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 20:14:33.419: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae88ec5b-ceda-4860-8915-9dc45f3653f7" in namespace "downward-api-1514" to be "Succeeded or Failed" May 6 20:14:33.470: INFO: Pod "downwardapi-volume-ae88ec5b-ceda-4860-8915-9dc45f3653f7": Phase="Pending", Reason="", readiness=false. Elapsed: 51.063363ms May 6 20:14:35.474: INFO: Pod "downwardapi-volume-ae88ec5b-ceda-4860-8915-9dc45f3653f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054721326s May 6 20:14:37.478: INFO: Pod "downwardapi-volume-ae88ec5b-ceda-4860-8915-9dc45f3653f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058180215s STEP: Saw pod success May 6 20:14:37.478: INFO: Pod "downwardapi-volume-ae88ec5b-ceda-4860-8915-9dc45f3653f7" satisfied condition "Succeeded or Failed" May 6 20:14:37.480: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ae88ec5b-ceda-4860-8915-9dc45f3653f7 container client-container: STEP: delete the pod May 6 20:14:37.566: INFO: Waiting for pod downwardapi-volume-ae88ec5b-ceda-4860-8915-9dc45f3653f7 to disappear May 6 20:14:37.590: INFO: Pod downwardapi-volume-ae88ec5b-ceda-4860-8915-9dc45f3653f7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:14:37.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1514" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":86,"skipped":1469,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:14:37.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-8fd3c390-aec5-430e-8c16-e1f399146f7b STEP: Creating a pod to test consume secrets May 6 20:14:37.729: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-80cabf81-93c1-4ee1-9c21-b4d3887bb91b" in namespace "projected-5757" to be "Succeeded or Failed" May 6 20:14:37.743: INFO: Pod "pod-projected-secrets-80cabf81-93c1-4ee1-9c21-b4d3887bb91b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.031817ms May 6 20:14:39.748: INFO: Pod "pod-projected-secrets-80cabf81-93c1-4ee1-9c21-b4d3887bb91b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018527119s May 6 20:14:41.753: INFO: Pod "pod-projected-secrets-80cabf81-93c1-4ee1-9c21-b4d3887bb91b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023505119s STEP: Saw pod success May 6 20:14:41.753: INFO: Pod "pod-projected-secrets-80cabf81-93c1-4ee1-9c21-b4d3887bb91b" satisfied condition "Succeeded or Failed" May 6 20:14:41.756: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-80cabf81-93c1-4ee1-9c21-b4d3887bb91b container projected-secret-volume-test: STEP: delete the pod May 6 20:14:41.778: INFO: Waiting for pod pod-projected-secrets-80cabf81-93c1-4ee1-9c21-b4d3887bb91b to disappear May 6 20:14:41.782: INFO: Pod pod-projected-secrets-80cabf81-93c1-4ee1-9c21-b4d3887bb91b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:14:41.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5757" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":87,"skipped":1527,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:14:41.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-3305f113-42b9-441d-a8df-e2a7b892b876 STEP: Creating configMap with name cm-test-opt-upd-e495222e-d136-40a1-9fcc-12218928fa9a STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-3305f113-42b9-441d-a8df-e2a7b892b876 STEP: Updating configmap cm-test-opt-upd-e495222e-d136-40a1-9fcc-12218928fa9a STEP: Creating configMap with name cm-test-opt-create-6d3d2e3d-37f9-4170-a9cf-d5a5e7a396a0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:15:56.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3009" for this suite. 
• [SLOW TEST:74.498 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":88,"skipped":1533,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:15:56.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-2091039e-e036-4928-b3e0-1248b49971c7 in namespace container-probe-6594 May 6 20:16:05.124: INFO: Started pod liveness-2091039e-e036-4928-b3e0-1248b49971c7 in namespace container-probe-6594 STEP: checking the pod's current state and verifying that restartCount is present May 6 20:16:05.567: INFO: Initial restart count of pod liveness-2091039e-e036-4928-b3e0-1248b49971c7 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:20:07.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6594" for this suite. 
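This is the liveness counterpart to the earlier readiness test: a tcpSocket probe against a port the container actually listens on should never fail, so restartCount must stay 0 for the whole observation window (the roughly four minutes between 20:16 and 20:20). A sketch with an illustrative image and port:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-tcp-demo
    spec:
      containers:
      - name: web
        image: nginx:1.17
        ports:
        - containerPort: 80
        livenessProbe:
          tcpSocket:
            port: 80          # kubelet opens a TCP connection; success means alive
          initialDelaySeconds: 15
          periodSeconds: 10
    EOF
    kubectl get pod liveness-tcp-demo \
      -o jsonpath='{.status.containerStatuses[0].restartCount}'   # expect 0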
• [SLOW TEST:251.520 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":89,"skipped":1551,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:20:07.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 6 20:20:18.504: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 20:20:18.555: INFO: Pod pod-with-poststart-exec-hook still exists May 6 20:20:20.556: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 20:20:20.659: INFO: Pod pod-with-poststart-exec-hook still exists May 6 20:20:22.556: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 20:20:22.580: INFO: Pod pod-with-poststart-exec-hook still exists May 6 20:20:24.556: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 20:20:24.690: INFO: Pod pod-with-poststart-exec-hook still exists May 6 20:20:26.556: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 20:20:26.559: INFO: Pod pod-with-poststart-exec-hook still exists May 6 20:20:28.556: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 20:20:28.575: INFO: Pod pod-with-poststart-exec-hook still exists May 6 20:20:30.556: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 20:20:30.743: INFO: Pod pod-with-poststart-exec-hook still exists May 6 20:20:32.556: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 20:20:32.561: INFO: Pod pod-with-poststart-exec-hook still exists May 6 20:20:34.556: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 20:20:34.995: INFO: Pod pod-with-poststart-exec-hook still exists May 6 20:20:36.556: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 20:20:36.558: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:20:36.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-lifecycle-hook-1020" for this suite. • [SLOW TEST:28.902 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":90,"skipped":1562,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:20:36.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 6 20:20:36.912: INFO: Waiting up to 5m0s for pod "client-containers-8fcea472-d1a9-4e03-aa51-1f7819acc80c" in namespace "containers-5709" to be "Succeeded or Failed" May 6 20:20:37.030: INFO: Pod "client-containers-8fcea472-d1a9-4e03-aa51-1f7819acc80c": Phase="Pending", Reason="", readiness=false. Elapsed: 118.613233ms May 6 20:20:39.035: INFO: Pod "client-containers-8fcea472-d1a9-4e03-aa51-1f7819acc80c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123370311s May 6 20:20:41.039: INFO: Pod "client-containers-8fcea472-d1a9-4e03-aa51-1f7819acc80c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126976048s May 6 20:20:43.092: INFO: Pod "client-containers-8fcea472-d1a9-4e03-aa51-1f7819acc80c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.179982406s STEP: Saw pod success May 6 20:20:43.092: INFO: Pod "client-containers-8fcea472-d1a9-4e03-aa51-1f7819acc80c" satisfied condition "Succeeded or Failed" May 6 20:20:43.095: INFO: Trying to get logs from node latest-worker pod client-containers-8fcea472-d1a9-4e03-aa51-1f7819acc80c container test-container: STEP: delete the pod May 6 20:20:43.637: INFO: Waiting for pod client-containers-8fcea472-d1a9-4e03-aa51-1f7819acc80c to disappear May 6 20:20:43.797: INFO: Pod client-containers-8fcea472-d1a9-4e03-aa51-1f7819acc80c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:20:43.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5709" for this suite. 
• [SLOW TEST:7.558 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":91,"skipped":1631,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:20:44.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 20:20:46.372: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 20:20:48.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393246, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393246, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393247, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393246, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 20:20:50.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393246, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393246, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393247, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393246, loc:(*time.Location)(0x7c2f200)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 20:20:53.311: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393246, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393246, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393247, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393246, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 20:20:55.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393246, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393246, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393247, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393246, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 20:20:58.601: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:20:59.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-913" for this suite. STEP: Destroying namespace "webhook-913-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.810 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":92,"skipped":1654,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:21:00.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0506 20:21:10.464071 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 6 20:21:10.464: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:21:10.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1844" for this suite. 
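The garbage-collector test creates pods through a ReplicationController and then deletes the RC without orphaning, relying on the pods' ownerReferences for cleanup. A sketch of the setup, with illustrative names and image:

apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: sleeper
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "sleep 3600"]

# Each pod the RC creates carries an ownerReference pointing at gc-demo-rc.
# A non-orphaning delete of the RC (the default for kubectl delete) therefore
# lets the garbage collector remove the pods, which is what the
# "wait for all pods to be garbage collected" step observes:
# kubectl delete rc gc-demo-rc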
• [SLOW TEST:10.387 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":93,"skipped":1656,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:21:10.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-dsz8 STEP: Creating a pod to test atomic-volume-subpath May 6 20:21:10.779: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dsz8" in namespace "subpath-767" to be "Succeeded or Failed" May 6 20:21:10.929: INFO: Pod "pod-subpath-test-projected-dsz8": Phase="Pending", Reason="", readiness=false. Elapsed: 150.013356ms May 6 20:21:12.932: INFO: Pod "pod-subpath-test-projected-dsz8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153799655s May 6 20:21:15.013: INFO: Pod "pod-subpath-test-projected-dsz8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234583341s May 6 20:21:17.016: INFO: Pod "pod-subpath-test-projected-dsz8": Phase="Running", Reason="", readiness=true. Elapsed: 6.237654125s May 6 20:21:19.020: INFO: Pod "pod-subpath-test-projected-dsz8": Phase="Running", Reason="", readiness=true. Elapsed: 8.241035514s May 6 20:21:21.024: INFO: Pod "pod-subpath-test-projected-dsz8": Phase="Running", Reason="", readiness=true. Elapsed: 10.245576905s May 6 20:21:23.027: INFO: Pod "pod-subpath-test-projected-dsz8": Phase="Running", Reason="", readiness=true. Elapsed: 12.248175524s May 6 20:21:25.031: INFO: Pod "pod-subpath-test-projected-dsz8": Phase="Running", Reason="", readiness=true. Elapsed: 14.252122547s May 6 20:21:27.035: INFO: Pod "pod-subpath-test-projected-dsz8": Phase="Running", Reason="", readiness=true. Elapsed: 16.256098569s May 6 20:21:29.057: INFO: Pod "pod-subpath-test-projected-dsz8": Phase="Running", Reason="", readiness=true. Elapsed: 18.278354642s May 6 20:21:31.075: INFO: Pod "pod-subpath-test-projected-dsz8": Phase="Running", Reason="", readiness=true. Elapsed: 20.296061909s May 6 20:21:33.114: INFO: Pod "pod-subpath-test-projected-dsz8": Phase="Running", Reason="", readiness=true. Elapsed: 22.335164889s May 6 20:21:35.118: INFO: Pod "pod-subpath-test-projected-dsz8": Phase="Running", Reason="", readiness=true. Elapsed: 24.339423844s May 6 20:21:37.121: INFO: Pod "pod-subpath-test-projected-dsz8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.342519829s STEP: Saw pod success May 6 20:21:37.121: INFO: Pod "pod-subpath-test-projected-dsz8" satisfied condition "Succeeded or Failed" May 6 20:21:37.123: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-dsz8 container test-container-subpath-projected-dsz8: STEP: delete the pod May 6 20:21:37.288: INFO: Waiting for pod pod-subpath-test-projected-dsz8 to disappear May 6 20:21:37.306: INFO: Pod pod-subpath-test-projected-dsz8 no longer exists STEP: Deleting pod pod-subpath-test-projected-dsz8 May 6 20:21:37.306: INFO: Deleting pod "pod-subpath-test-projected-dsz8" in namespace "subpath-767" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:21:37.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-767" for this suite. • [SLOW TEST:26.844 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":94,"skipped":1674,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:21:37.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:21:37.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9478" for this suite. 
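The Subpath test above mounts a single file out of an atomic-writer (projected) volume via subPath. A minimal sketch with illustrative names; note that subPath mounts are bound once at container start, so unlike a whole-volume mount they do not see later updates to the underlying data:

apiVersion: v1
kind: Pod
metadata:
  name: subpath-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /mnt/file.txt"]
    volumeMounts:
    - name: proj
      mountPath: /mnt/file.txt
      subPath: file.txt                # mount just this path from the volume
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: subpath-data           # illustrative; must contain the key file.txt
          items:
          - key: file.txt
            path: file.txt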
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":95,"skipped":1702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:21:37.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 6 20:21:38.482: INFO: created pod pod-service-account-defaultsa May 6 20:21:38.482: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 6 20:21:38.516: INFO: created pod pod-service-account-mountsa May 6 20:21:38.516: INFO: pod pod-service-account-mountsa service account token volume mount: true May 6 20:21:38.599: INFO: created pod pod-service-account-nomountsa May 6 20:21:38.599: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 6 20:21:38.641: INFO: created pod pod-service-account-defaultsa-mountspec May 6 20:21:38.641: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 6 20:21:38.695: INFO: created pod pod-service-account-mountsa-mountspec May 6 20:21:38.695: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 6 20:21:38.749: INFO: created pod pod-service-account-nomountsa-mountspec May 6 20:21:38.749: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 6 20:21:38.779: INFO: created pod pod-service-account-defaultsa-nomountspec May 6 20:21:38.779: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 6 20:21:38.875: INFO: created pod pod-service-account-mountsa-nomountspec May 6 20:21:38.875: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 6 20:21:38.925: INFO: created pod pod-service-account-nomountsa-nomountspec May 6 20:21:38.925: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:21:38.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6856" for this suite. 
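The ServiceAccounts test above creates pods with every combination of ServiceAccount-level and pod-level automount settings; the "token volume mount: true/false" lines record the effective result, with the pod-level field winning whenever both are set. A sketch of a full opt-out, with illustrative names:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-automount-sa
automountServiceAccountToken: false    # ServiceAccount-level default
---
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  serviceAccountName: no-automount-sa
  automountServiceAccountToken: false  # pod-level field overrides the ServiceAccount
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || sleep 3600"]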
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":96,"skipped":1741,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:21:39.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9240.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9240.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 20:21:58.480: INFO: DNS probes using dns-9240/dns-test-5a116e32-0135-406c-9701-ad378dad58a4 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:21:58.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9240" for this suite. 
• [SLOW TEST:20.771 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":97,"skipped":1753,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:21:59.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-6a3f40fc-aee9-4c63-8fea-792bbd86c646 STEP: Creating a pod to test consume configMaps May 6 20:22:03.277: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6bf86b34-0f49-463b-a3a3-d088f6f46c99" in namespace "projected-1932" to be "Succeeded or Failed" May 6 20:22:03.867: INFO: Pod "pod-projected-configmaps-6bf86b34-0f49-463b-a3a3-d088f6f46c99": Phase="Pending", Reason="", readiness=false. Elapsed: 589.517329ms May 6 20:22:05.929: INFO: Pod "pod-projected-configmaps-6bf86b34-0f49-463b-a3a3-d088f6f46c99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.652186847s May 6 20:22:07.997: INFO: Pod "pod-projected-configmaps-6bf86b34-0f49-463b-a3a3-d088f6f46c99": Phase="Running", Reason="", readiness=true. Elapsed: 4.719828529s May 6 20:22:10.001: INFO: Pod "pod-projected-configmaps-6bf86b34-0f49-463b-a3a3-d088f6f46c99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.723697562s STEP: Saw pod success May 6 20:22:10.001: INFO: Pod "pod-projected-configmaps-6bf86b34-0f49-463b-a3a3-d088f6f46c99" satisfied condition "Succeeded or Failed" May 6 20:22:10.004: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-6bf86b34-0f49-463b-a3a3-d088f6f46c99 container projected-configmap-volume-test: STEP: delete the pod May 6 20:22:10.052: INFO: Waiting for pod pod-projected-configmaps-6bf86b34-0f49-463b-a3a3-d088f6f46c99 to disappear May 6 20:22:10.056: INFO: Pod pod-projected-configmaps-6bf86b34-0f49-463b-a3a3-d088f6f46c99 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:22:10.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1932" for this suite. 
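This variant of the projected ConfigMap test adds two twists over a plain volume mount: items remaps a key to a different file name ("mappings"), and mode sets a per-file permission ("Item mode set"). A sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/cfg && cat /etc/cfg/renamed-data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config            # illustrative; must contain the key data-1
          items:
          - key: data-1                # source key in the ConfigMap
            path: renamed-data         # file name inside the volume
            mode: 0400                 # per-item file mode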
• [SLOW TEST:10.289 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":98,"skipped":1769,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:22:10.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-7761 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7761 to expose endpoints map[] May 6 20:22:10.432: INFO: Get endpoints failed (62.166009ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 6 20:22:11.436: INFO: successfully validated that service multi-endpoint-test in namespace services-7761 exposes endpoints map[] (1.065471344s elapsed) STEP: Creating pod pod1 in namespace services-7761 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7761 to expose endpoints map[pod1:[100]] May 6 20:22:14.553: INFO: successfully validated that service multi-endpoint-test in namespace services-7761 exposes endpoints map[pod1:[100]] (3.111362094s elapsed) STEP: Creating pod pod2 in namespace services-7761 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7761 to expose endpoints map[pod1:[100] pod2:[101]] May 6 20:22:20.067: INFO: Unexpected endpoints: found map[5e19924e-d557-4e4a-a774-5a749a62e84f:[100]], expected map[pod1:[100] pod2:[101]] (5.509346917s elapsed, will retry) May 6 20:22:21.683: INFO: successfully validated that service multi-endpoint-test in namespace services-7761 exposes endpoints map[pod1:[100] pod2:[101]] (7.125816693s elapsed) STEP: Deleting pod pod1 in namespace services-7761 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7761 to expose endpoints map[pod2:[101]] May 6 20:22:22.437: INFO: successfully validated that service multi-endpoint-test in namespace services-7761 exposes endpoints map[pod2:[101]] (292.334655ms elapsed) STEP: Deleting pod pod2 in namespace services-7761 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7761 to expose endpoints map[] May 6 20:22:22.704: INFO: successfully validated that service multi-endpoint-test in namespace services-7761 exposes endpoints map[] (44.580654ms elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:22:23.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7761" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:13.852 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":99,"skipped":1788,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:22:23.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 6 20:22:24.918: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:22:35.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9566" for this suite. 
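The InitContainer test above builds a RestartAlways pod whose init containers must each run to completion, in order, before the app container starts. A minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:                      # run sequentially to completion before 'containers'
  - name: init-1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]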
• [SLOW TEST:11.800 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":100,"skipped":1792,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:22:35.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-3397/configmap-test-671f5eb4-a0f4-4489-9e97-18bc002d52dd STEP: Creating a pod to test consume configMaps May 6 20:22:35.947: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ff858cb-e20d-4bb9-8df8-02f40eb47479" in namespace "configmap-3397" to be "Succeeded or Failed" May 6 20:22:35.961: INFO: Pod "pod-configmaps-6ff858cb-e20d-4bb9-8df8-02f40eb47479": Phase="Pending", Reason="", readiness=false. Elapsed: 13.612639ms May 6 20:22:37.964: INFO: Pod "pod-configmaps-6ff858cb-e20d-4bb9-8df8-02f40eb47479": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017016391s May 6 20:22:39.986: INFO: Pod "pod-configmaps-6ff858cb-e20d-4bb9-8df8-02f40eb47479": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038217808s May 6 20:22:41.990: INFO: Pod "pod-configmaps-6ff858cb-e20d-4bb9-8df8-02f40eb47479": Phase="Running", Reason="", readiness=true. Elapsed: 6.042103865s May 6 20:22:43.994: INFO: Pod "pod-configmaps-6ff858cb-e20d-4bb9-8df8-02f40eb47479": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046346761s STEP: Saw pod success May 6 20:22:43.994: INFO: Pod "pod-configmaps-6ff858cb-e20d-4bb9-8df8-02f40eb47479" satisfied condition "Succeeded or Failed" May 6 20:22:43.996: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-6ff858cb-e20d-4bb9-8df8-02f40eb47479 container env-test: STEP: delete the pod May 6 20:22:44.044: INFO: Waiting for pod pod-configmaps-6ff858cb-e20d-4bb9-8df8-02f40eb47479 to disappear May 6 20:22:44.060: INFO: Pod pod-configmaps-6ff858cb-e20d-4bb9-8df8-02f40eb47479 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:22:44.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3397" for this suite. 
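The ConfigMap env test injects a single ConfigMap key into the container environment with valueFrom.configMapKeyRef. A sketch, with illustrative names and values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: env-demo-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-demo-config
          key: data-1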
• [SLOW TEST:8.334 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":101,"skipped":1825,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:22:44.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 20:22:44.709: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31c2b213-8d05-45ab-abba-1474ec617039" in namespace "projected-1029" to be "Succeeded or Failed" May 6 20:22:44.725: INFO: Pod "downwardapi-volume-31c2b213-8d05-45ab-abba-1474ec617039": Phase="Pending", Reason="", readiness=false. Elapsed: 15.566514ms May 6 20:22:46.729: INFO: Pod "downwardapi-volume-31c2b213-8d05-45ab-abba-1474ec617039": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019735174s May 6 20:22:48.938: INFO: Pod "downwardapi-volume-31c2b213-8d05-45ab-abba-1474ec617039": Phase="Running", Reason="", readiness=true. Elapsed: 4.228941306s May 6 20:22:50.943: INFO: Pod "downwardapi-volume-31c2b213-8d05-45ab-abba-1474ec617039": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.233921157s STEP: Saw pod success May 6 20:22:50.943: INFO: Pod "downwardapi-volume-31c2b213-8d05-45ab-abba-1474ec617039" satisfied condition "Succeeded or Failed" May 6 20:22:50.946: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-31c2b213-8d05-45ab-abba-1474ec617039 container client-container: STEP: delete the pod May 6 20:22:51.036: INFO: Waiting for pod downwardapi-volume-31c2b213-8d05-45ab-abba-1474ec617039 to disappear May 6 20:22:51.383: INFO: Pod downwardapi-volume-31c2b213-8d05-45ab-abba-1474ec617039 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:22:51.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1029" for this suite. 
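The projected downwardAPI test exposes the container's own memory limit as a file via resourceFieldRef. A sketch with illustrative names; with the default divisor of 1 the file contains the limit in bytes:

apiVersion: v1
kind: Pod
metadata:
  name: downward-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"                 # surfaces as 67108864 in the file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory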
• [SLOW TEST:7.295 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":102,"skipped":1832,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:22:51.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:22:51.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-6874" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":103,"skipped":1855,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:22:51.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 6 20:22:51.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7430' May 6 20:22:58.722: INFO: stderr: "" May 6 20:22:58.723: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 6 20:23:03.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7430 -o json' May 6 20:23:04.236: INFO: stderr: "" May 6 20:23:04.236: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-06T20:22:58Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-06T20:22:58Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n 
\"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.190\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-06T20:23:02Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7430\",\n \"resourceVersion\": \"2089543\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7430/pods/e2e-test-httpd-pod\",\n \"uid\": \"481d594b-aa17-4549-b57e-8d5dbecbb98c\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-xpmzk\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-xpmzk\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-xpmzk\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T20:22:58Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T20:23:02Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T20:23:02Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T20:22:58Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://8bf7f9e35f56f13a54f3627822daae242540a21442dce9560779831e43a63b5c\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-06T20:23:01Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.190\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.190\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-06T20:22:58Z\"\n }\n}\n" STEP: replace the image in the pod May 6 20:23:04.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7430' May 6 20:23:05.157: INFO: stderr: "" May 6 20:23:05.157: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image 
docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 6 20:23:05.438: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7430' May 6 20:23:15.284: INFO: stderr: "" May 6 20:23:15.284: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:23:15.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7430" for this suite. • [SLOW TEST:23.673 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":104,"skipped":1866,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:23:15.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:23:15.519: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 6 20:23:18.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 create -f -' May 6 20:23:25.076: INFO: stderr: "" May 6 20:23:25.076: INFO: stdout: "e2e-test-crd-publish-openapi-1326-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 6 20:23:25.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 delete e2e-test-crd-publish-openapi-1326-crds test-foo' May 6 20:23:25.195: INFO: stderr: "" May 6 20:23:25.195: INFO: stdout: "e2e-test-crd-publish-openapi-1326-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 6 20:23:25.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 apply -f -' May 6 20:23:25.511: INFO: stderr: "" May 6 20:23:25.511: INFO: stdout: "e2e-test-crd-publish-openapi-1326-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 6 20:23:25.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 delete e2e-test-crd-publish-openapi-1326-crds test-foo' May 6 20:23:25.680: INFO: stderr: "" May 6 20:23:25.680: INFO: stdout: "e2e-test-crd-publish-openapi-1326-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 6 20:23:25.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 create -f -' May 6 20:23:26.068: INFO: rc: 1 May 6 20:23:26.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 apply -f -' May 6 20:23:27.429: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 6 20:23:27.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 create -f -' May 6 20:23:27.968: INFO: rc: 1 May 6 20:23:27.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 apply -f -' May 6 20:23:28.462: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 6 20:23:28.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1326-crds' May 6 20:23:29.081: INFO: stderr: "" May 6 20:23:29.081: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1326-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 6 20:23:29.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1326-crds.metadata' May 6 20:23:29.420: INFO: stderr: "" May 6 20:23:29.420: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1326-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. 
They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. 
Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 6 20:23:29.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1326-crds.spec' May 6 20:23:29.979: INFO: stderr: "" May 6 20:23:29.979: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1326-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 6 20:23:29.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1326-crds.spec.bars' May 6 20:23:30.257: INFO: stderr: "" May 6 20:23:30.257: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1326-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 6 20:23:30.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1326-crds.spec.bars2' May 6 20:23:30.509: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:23:32.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3167" for this suite. 
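The checks above hinge on the CRD carrying a structural OpenAPI v3 schema: the apiserver publishes it (under /openapi/v2), and both kubectl's client-side validation (the rc: 1 rejections) and kubectl explain read from that published document. A minimal sketch of the same setup, assuming a reachable cluster and using the hypothetical group/name foos.example.com in place of the generated e2e-test-crd-publish-openapi-1326-crds:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: foos.example.com
  spec:
    group: example.com
    scope: Namespaced
    names:
      plural: foos
      singular: foo
      kind: Foo
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              required: ["name"]          # objects missing spec.name are rejected
              properties:
                name: {type: string}
                age:  {type: integer}
  EOF
  # schema publishing can lag CRD creation by a few seconds
  kubectl explain foos.spec
  kubectl explain foos.spec.name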
• [SLOW TEST:17.823 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":105,"skipped":1884,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:23:33.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 6 20:23:33.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8472' May 6 20:23:33.967: INFO: stderr: "" May 6 20:23:33.967: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 20:23:33.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8472' May 6 20:23:34.114: INFO: stderr: "" May 6 20:23:34.114: INFO: stdout: "update-demo-nautilus-bzwnl update-demo-nautilus-mhl5p " May 6 20:23:34.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzwnl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8472' May 6 20:23:34.395: INFO: stderr: "" May 6 20:23:34.395: INFO: stdout: "" May 6 20:23:34.395: INFO: update-demo-nautilus-bzwnl is created but not running May 6 20:23:39.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8472' May 6 20:23:39.497: INFO: stderr: "" May 6 20:23:39.497: INFO: stdout: "update-demo-nautilus-bzwnl update-demo-nautilus-mhl5p " May 6 20:23:39.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzwnl -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8472' May 6 20:23:39.647: INFO: stderr: "" May 6 20:23:39.647: INFO: stdout: "" May 6 20:23:39.647: INFO: update-demo-nautilus-bzwnl is created but not running May 6 20:23:44.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8472' May 6 20:23:44.761: INFO: stderr: "" May 6 20:23:44.761: INFO: stdout: "update-demo-nautilus-bzwnl update-demo-nautilus-mhl5p " May 6 20:23:44.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzwnl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8472' May 6 20:23:44.851: INFO: stderr: "" May 6 20:23:44.851: INFO: stdout: "true" May 6 20:23:44.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzwnl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8472' May 6 20:23:45.075: INFO: stderr: "" May 6 20:23:45.075: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 20:23:45.075: INFO: validating pod update-demo-nautilus-bzwnl May 6 20:23:45.080: INFO: got data: { "image": "nautilus.jpg" } May 6 20:23:45.080: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 20:23:45.080: INFO: update-demo-nautilus-bzwnl is verified up and running May 6 20:23:45.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mhl5p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8472' May 6 20:23:45.187: INFO: stderr: "" May 6 20:23:45.187: INFO: stdout: "true" May 6 20:23:45.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mhl5p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8472' May 6 20:23:45.287: INFO: stderr: "" May 6 20:23:45.287: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 20:23:45.287: INFO: validating pod update-demo-nautilus-mhl5p May 6 20:23:45.291: INFO: got data: { "image": "nautilus.jpg" } May 6 20:23:45.291: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 20:23:45.291: INFO: update-demo-nautilus-mhl5p is verified up and running STEP: using delete to clean up resources May 6 20:23:45.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8472' May 6 20:23:45.605: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 6 20:23:45.605: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 6 20:23:45.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8472' May 6 20:23:46.769: INFO: stderr: "No resources found in kubectl-8472 namespace.\n" May 6 20:23:46.769: INFO: stdout: "" May 6 20:23:46.769: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8472 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 20:23:46.998: INFO: stderr: "" May 6 20:23:46.998: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:23:46.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8472" for this suite. • [SLOW TEST:14.217 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":106,"skipped":1887,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:23:47.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-a57d7bee-29f9-485a-b244-307f896a0eeb STEP: Creating a pod to test consume configMaps May 6 20:23:49.414: INFO: Waiting up to 5m0s for pod "pod-configmaps-f13ecb2a-f4b2-4928-b72e-c89ffcacfaf3" in namespace "configmap-1106" to be "Succeeded or Failed" May 6 20:23:49.678: INFO: Pod "pod-configmaps-f13ecb2a-f4b2-4928-b72e-c89ffcacfaf3": Phase="Pending", Reason="", readiness=false. Elapsed: 264.251885ms May 6 20:23:51.900: INFO: Pod "pod-configmaps-f13ecb2a-f4b2-4928-b72e-c89ffcacfaf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485732863s May 6 20:23:54.091: INFO: Pod "pod-configmaps-f13ecb2a-f4b2-4928-b72e-c89ffcacfaf3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.67697168s May 6 20:23:56.101: INFO: Pod "pod-configmaps-f13ecb2a-f4b2-4928-b72e-c89ffcacfaf3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.687176836s
May 6 20:23:58.181: INFO: Pod "pod-configmaps-f13ecb2a-f4b2-4928-b72e-c89ffcacfaf3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.767127993s
May 6 20:24:00.355: INFO: Pod "pod-configmaps-f13ecb2a-f4b2-4928-b72e-c89ffcacfaf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.941125494s
STEP: Saw pod success
May 6 20:24:00.355: INFO: Pod "pod-configmaps-f13ecb2a-f4b2-4928-b72e-c89ffcacfaf3" satisfied condition "Succeeded or Failed"
May 6 20:24:00.359: INFO: Trying to get logs from node latest-worker pod pod-configmaps-f13ecb2a-f4b2-4928-b72e-c89ffcacfaf3 container configmap-volume-test:
STEP: delete the pod
May 6 20:24:01.362: INFO: Waiting for pod pod-configmaps-f13ecb2a-f4b2-4928-b72e-c89ffcacfaf3 to disappear
May 6 20:24:01.398: INFO: Pod pod-configmaps-f13ecb2a-f4b2-4928-b72e-c89ffcacfaf3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 6 20:24:01.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1106" for this suite.
• [SLOW TEST:14.047 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":107,"skipped":1908,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 6 20:24:01.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
May 6 20:24:02.442: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

May 6 20:24:02.443: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8578'
May 6 20:24:03.478: INFO: stderr: ""
May 6 20:24:03.478: INFO: stdout: "service/agnhost-slave created\n"
May 6 20:24:03.479: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

May 6 20:24:03.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8578'
May 6 20:24:04.893: INFO: stderr: ""
May 6 20:24:04.893: INFO: stdout: "service/agnhost-master created\n"
May 6 20:24:04.894: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 6 20:24:04.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8578'
May 6 20:24:05.881: INFO: stderr: ""
May 6 20:24:05.881: INFO: stdout: "service/frontend created\n"
May 6 20:24:05.882: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

May 6 20:24:05.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8578'
May 6 20:24:06.767: INFO: stderr: ""
May 6 20:24:06.768: INFO: stdout: "deployment.apps/frontend created\n"
May 6 20:24:06.768: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 6 20:24:06.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8578'
May 6 20:24:08.183: INFO: stderr: ""
May 6 20:24:08.183: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 6 20:24:08.183: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 6 20:24:08.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8578'
May 6 20:24:09.867: INFO: stderr: ""
May 6 20:24:09.867: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May 6 20:24:09.867: INFO: Waiting for all frontend pods to be Running.
May 6 20:24:24.918: INFO: Waiting for frontend to serve content.
May 6 20:24:24.927: INFO: Trying to add a new entry to the guestbook.
May 6 20:24:24.967: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources May 6 20:24:25.003: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8578' May 6 20:24:25.341: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 20:24:25.342: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 6 20:24:25.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8578' May 6 20:24:25.776: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 20:24:25.776: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 6 20:24:25.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8578' May 6 20:24:26.253: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 20:24:26.253: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 6 20:24:26.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8578' May 6 20:24:26.409: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 20:24:26.409: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 6 20:24:26.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8578' May 6 20:24:26.810: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 20:24:26.810: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 6 20:24:26.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8578' May 6 20:24:28.240: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 20:24:28.240: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:24:28.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8578" for this suite. 
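Every Service in the guestbook wires to its pods purely through label selection (the app/role/tier labels above), which is why the validation step only needs to wait for frontend pods to be Running. While the namespace still exists, the same wiring can be checked by hand; a sketch, assuming the manifests above were applied into namespace kubectl-8578:

  kubectl -n kubectl-8578 get deploy,svc -l app=agnhost           # backends and their services
  kubectl -n kubectl-8578 get endpoints frontend                  # non-empty once frontend pods are Ready
  kubectl -n kubectl-8578 rollout status deploy/frontend --timeout=2m
  # reach the frontend without a LoadBalancer:
  kubectl -n kubectl-8578 port-forward svc/frontend 8080:80 &
  curl -s http://localhost:8080/ >/dev/null && echo frontend reachable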
• [SLOW TEST:27.747 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":108,"skipped":1929,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:24:29.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:24:31.002: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-dbff2243-d7ab-4126-bca8-becfc0bfd1b0" in namespace "security-context-test-4644" to be "Succeeded or Failed" May 6 20:24:31.690: INFO: Pod "alpine-nnp-false-dbff2243-d7ab-4126-bca8-becfc0bfd1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 688.184728ms May 6 20:24:33.738: INFO: Pod "alpine-nnp-false-dbff2243-d7ab-4126-bca8-becfc0bfd1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.735776477s May 6 20:24:35.746: INFO: Pod "alpine-nnp-false-dbff2243-d7ab-4126-bca8-becfc0bfd1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.74450053s May 6 20:24:37.791: INFO: Pod "alpine-nnp-false-dbff2243-d7ab-4126-bca8-becfc0bfd1b0": Phase="Running", Reason="", readiness=true. Elapsed: 6.789473638s May 6 20:24:39.794: INFO: Pod "alpine-nnp-false-dbff2243-d7ab-4126-bca8-becfc0bfd1b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.79251855s May 6 20:24:39.794: INFO: Pod "alpine-nnp-false-dbff2243-d7ab-4126-bca8-becfc0bfd1b0" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:24:39.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4644" for this suite. 
• [SLOW TEST:10.674 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":109,"skipped":1962,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:24:39.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:24:45.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5076" for this suite. • [SLOW TEST:6.193 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":110,"skipped":1970,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:24:46.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 6 20:24:46.169: INFO: Waiting up to 5m0s for pod "pod-37613366-e73b-46e3-b9d7-ae163ab87a54" in namespace "emptydir-1191" to be "Succeeded or Failed" May 6 20:24:46.219: INFO: Pod "pod-37613366-e73b-46e3-b9d7-ae163ab87a54": Phase="Pending", Reason="", readiness=false. 
Elapsed: 49.83508ms May 6 20:24:48.226: INFO: Pod "pod-37613366-e73b-46e3-b9d7-ae163ab87a54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056878417s May 6 20:24:50.230: INFO: Pod "pod-37613366-e73b-46e3-b9d7-ae163ab87a54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060962417s STEP: Saw pod success May 6 20:24:50.230: INFO: Pod "pod-37613366-e73b-46e3-b9d7-ae163ab87a54" satisfied condition "Succeeded or Failed" May 6 20:24:50.233: INFO: Trying to get logs from node latest-worker2 pod pod-37613366-e73b-46e3-b9d7-ae163ab87a54 container test-container: STEP: delete the pod May 6 20:24:50.360: INFO: Waiting for pod pod-37613366-e73b-46e3-b9d7-ae163ab87a54 to disappear May 6 20:24:50.366: INFO: Pod pod-37613366-e73b-46e3-b9d7-ae163ab87a54 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:24:50.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1191" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":111,"skipped":1972,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:24:50.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 6 20:24:50.624: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9414 /api/v1/namespaces/watch-9414/configmaps/e2e-watch-test-watch-closed d100cd84-c13a-48cf-a872-64d757d06be1 2090207 0 2020-05-06 20:24:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-06 20:24:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 6 20:24:50.624: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9414 /api/v1/namespaces/watch-9414/configmaps/e2e-watch-test-watch-closed d100cd84-c13a-48cf-a872-64d757d06be1 2090209 0 2020-05-06 20:24:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-06 20:24:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch 
STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 6 20:24:50.639: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9414 /api/v1/namespaces/watch-9414/configmaps/e2e-watch-test-watch-closed d100cd84-c13a-48cf-a872-64d757d06be1 2090210 0 2020-05-06 20:24:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-06 20:24:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 20:24:50.639: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9414 /api/v1/namespaces/watch-9414/configmaps/e2e-watch-test-watch-closed d100cd84-c13a-48cf-a872-64d757d06be1 2090211 0 2020-05-06 20:24:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-06 20:24:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:24:50.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9414" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":112,"skipped":2024,"failed":0} SSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:24:50.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8422 May 6 20:24:54.753: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8422 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 6 20:24:54.968: INFO: stderr: "I0506 20:24:54.874330 1989 log.go:172] (0xc000ac7e40) (0xc000687d60) Create stream\nI0506 20:24:54.874399 1989 log.go:172] (0xc000ac7e40) (0xc000687d60) Stream added, broadcasting: 1\nI0506 20:24:54.876992 1989 log.go:172] (0xc000ac7e40) Reply frame received for 1\nI0506 20:24:54.877029 1989 log.go:172] (0xc000ac7e40) (0xc000b58780) Create stream\nI0506 20:24:54.877044 1989 log.go:172] (0xc000ac7e40) (0xc000b58780) Stream added, broadcasting: 3\nI0506 20:24:54.878186 1989 log.go:172] (0xc000ac7e40) Reply frame received for 3\nI0506 
20:24:54.878218 1989 log.go:172] (0xc000ac7e40) (0xc0006aa640) Create stream\nI0506 20:24:54.878224 1989 log.go:172] (0xc000ac7e40) (0xc0006aa640) Stream added, broadcasting: 5\nI0506 20:24:54.879516 1989 log.go:172] (0xc000ac7e40) Reply frame received for 5\nI0506 20:24:54.956006 1989 log.go:172] (0xc000ac7e40) Data frame received for 5\nI0506 20:24:54.956028 1989 log.go:172] (0xc0006aa640) (5) Data frame handling\nI0506 20:24:54.956041 1989 log.go:172] (0xc0006aa640) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0506 20:24:54.962478 1989 log.go:172] (0xc000ac7e40) Data frame received for 3\nI0506 20:24:54.962497 1989 log.go:172] (0xc000b58780) (3) Data frame handling\nI0506 20:24:54.962508 1989 log.go:172] (0xc000b58780) (3) Data frame sent\nI0506 20:24:54.962983 1989 log.go:172] (0xc000ac7e40) Data frame received for 3\nI0506 20:24:54.963001 1989 log.go:172] (0xc000b58780) (3) Data frame handling\nI0506 20:24:54.963121 1989 log.go:172] (0xc000ac7e40) Data frame received for 5\nI0506 20:24:54.963140 1989 log.go:172] (0xc0006aa640) (5) Data frame handling\nI0506 20:24:54.964681 1989 log.go:172] (0xc000ac7e40) Data frame received for 1\nI0506 20:24:54.964696 1989 log.go:172] (0xc000687d60) (1) Data frame handling\nI0506 20:24:54.964706 1989 log.go:172] (0xc000687d60) (1) Data frame sent\nI0506 20:24:54.964726 1989 log.go:172] (0xc000ac7e40) (0xc000687d60) Stream removed, broadcasting: 1\nI0506 20:24:54.964773 1989 log.go:172] (0xc000ac7e40) Go away received\nI0506 20:24:54.965097 1989 log.go:172] (0xc000ac7e40) (0xc000687d60) Stream removed, broadcasting: 1\nI0506 20:24:54.965256 1989 log.go:172] (0xc000ac7e40) (0xc000b58780) Stream removed, broadcasting: 3\nI0506 20:24:54.965279 1989 log.go:172] (0xc000ac7e40) (0xc0006aa640) Stream removed, broadcasting: 5\n" May 6 20:24:54.969: INFO: stdout: "iptables" May 6 20:24:54.969: INFO: proxyMode: iptables May 6 20:24:55.075: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 6 20:24:55.103: INFO: Pod kube-proxy-mode-detector still exists May 6 20:24:57.104: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 6 20:24:57.107: INFO: Pod kube-proxy-mode-detector still exists May 6 20:24:59.103: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 6 20:24:59.107: INFO: Pod kube-proxy-mode-detector still exists May 6 20:25:01.103: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 6 20:25:01.106: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-8422 STEP: creating replication controller affinity-nodeport-timeout in namespace services-8422 I0506 20:25:01.524204 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-8422, replica count: 3 I0506 20:25:04.574589 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 20:25:07.574802 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 20:25:10.575030 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 20:25:13.575297 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 
0 runningButNotReady May 6 20:25:13.585: INFO: Creating new exec pod May 6 20:25:18.656: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8422 execpod-affinitysb5mp -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 6 20:25:18.861: INFO: stderr: "I0506 20:25:18.785094 2010 log.go:172] (0xc000aea0b0) (0xc00099a320) Create stream\nI0506 20:25:18.785284 2010 log.go:172] (0xc000aea0b0) (0xc00099a320) Stream added, broadcasting: 1\nI0506 20:25:18.787648 2010 log.go:172] (0xc000aea0b0) Reply frame received for 1\nI0506 20:25:18.787684 2010 log.go:172] (0xc000aea0b0) (0xc00099abe0) Create stream\nI0506 20:25:18.787698 2010 log.go:172] (0xc000aea0b0) (0xc00099abe0) Stream added, broadcasting: 3\nI0506 20:25:18.788445 2010 log.go:172] (0xc000aea0b0) Reply frame received for 3\nI0506 20:25:18.788472 2010 log.go:172] (0xc000aea0b0) (0xc0009ad2c0) Create stream\nI0506 20:25:18.788480 2010 log.go:172] (0xc000aea0b0) (0xc0009ad2c0) Stream added, broadcasting: 5\nI0506 20:25:18.789337 2010 log.go:172] (0xc000aea0b0) Reply frame received for 5\nI0506 20:25:18.852849 2010 log.go:172] (0xc000aea0b0) Data frame received for 5\nI0506 20:25:18.852880 2010 log.go:172] (0xc0009ad2c0) (5) Data frame handling\nI0506 20:25:18.852900 2010 log.go:172] (0xc0009ad2c0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0506 20:25:18.852968 2010 log.go:172] (0xc000aea0b0) Data frame received for 5\nI0506 20:25:18.853008 2010 log.go:172] (0xc0009ad2c0) (5) Data frame handling\nI0506 20:25:18.853047 2010 log.go:172] (0xc0009ad2c0) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0506 20:25:18.853737 2010 log.go:172] (0xc000aea0b0) Data frame received for 3\nI0506 20:25:18.853754 2010 log.go:172] (0xc00099abe0) (3) Data frame handling\nI0506 20:25:18.853853 2010 log.go:172] (0xc000aea0b0) Data frame received for 5\nI0506 20:25:18.853875 2010 log.go:172] (0xc0009ad2c0) (5) Data frame handling\nI0506 20:25:18.855785 2010 log.go:172] (0xc000aea0b0) Data frame received for 1\nI0506 20:25:18.855803 2010 log.go:172] (0xc00099a320) (1) Data frame handling\nI0506 20:25:18.855829 2010 log.go:172] (0xc00099a320) (1) Data frame sent\nI0506 20:25:18.855854 2010 log.go:172] (0xc000aea0b0) (0xc00099a320) Stream removed, broadcasting: 1\nI0506 20:25:18.855875 2010 log.go:172] (0xc000aea0b0) Go away received\nI0506 20:25:18.856275 2010 log.go:172] (0xc000aea0b0) (0xc00099a320) Stream removed, broadcasting: 1\nI0506 20:25:18.856316 2010 log.go:172] (0xc000aea0b0) (0xc00099abe0) Stream removed, broadcasting: 3\nI0506 20:25:18.856334 2010 log.go:172] (0xc000aea0b0) (0xc0009ad2c0) Stream removed, broadcasting: 5\n" May 6 20:25:18.861: INFO: stdout: "" May 6 20:25:18.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8422 execpod-affinitysb5mp -- /bin/sh -x -c nc -zv -t -w 2 10.99.203.30 80' May 6 20:25:19.071: INFO: stderr: "I0506 20:25:18.985347 2030 log.go:172] (0xc00003a420) (0xc000557900) Create stream\nI0506 20:25:18.985426 2030 log.go:172] (0xc00003a420) (0xc000557900) Stream added, broadcasting: 1\nI0506 20:25:18.988923 2030 log.go:172] (0xc00003a420) Reply frame received for 1\nI0506 20:25:18.988989 2030 log.go:172] (0xc00003a420) (0xc00053e1e0) Create stream\nI0506 20:25:18.989011 2030 log.go:172] (0xc00003a420) (0xc00053e1e0) Stream added, broadcasting: 3\nI0506 20:25:18.990267 2030 
log.go:172] (0xc00003a420) Reply frame received for 3\nI0506 20:25:18.990293 2030 log.go:172] (0xc00003a420) (0xc000557b80) Create stream\nI0506 20:25:18.990315 2030 log.go:172] (0xc00003a420) (0xc000557b80) Stream added, broadcasting: 5\nI0506 20:25:18.991248 2030 log.go:172] (0xc00003a420) Reply frame received for 5\nI0506 20:25:19.065335 2030 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 20:25:19.065363 2030 log.go:172] (0xc00053e1e0) (3) Data frame handling\nI0506 20:25:19.065391 2030 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 20:25:19.065399 2030 log.go:172] (0xc000557b80) (5) Data frame handling\nI0506 20:25:19.065407 2030 log.go:172] (0xc000557b80) (5) Data frame sent\nI0506 20:25:19.065414 2030 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 20:25:19.065423 2030 log.go:172] (0xc000557b80) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.203.30 80\nConnection to 10.99.203.30 80 port [tcp/http] succeeded!\nI0506 20:25:19.066593 2030 log.go:172] (0xc00003a420) Data frame received for 1\nI0506 20:25:19.066614 2030 log.go:172] (0xc000557900) (1) Data frame handling\nI0506 20:25:19.066629 2030 log.go:172] (0xc000557900) (1) Data frame sent\nI0506 20:25:19.066649 2030 log.go:172] (0xc00003a420) (0xc000557900) Stream removed, broadcasting: 1\nI0506 20:25:19.066685 2030 log.go:172] (0xc00003a420) Go away received\nI0506 20:25:19.066992 2030 log.go:172] (0xc00003a420) (0xc000557900) Stream removed, broadcasting: 1\nI0506 20:25:19.067011 2030 log.go:172] (0xc00003a420) (0xc00053e1e0) Stream removed, broadcasting: 3\nI0506 20:25:19.067019 2030 log.go:172] (0xc00003a420) (0xc000557b80) Stream removed, broadcasting: 5\n" May 6 20:25:19.071: INFO: stdout: "" May 6 20:25:19.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8422 execpod-affinitysb5mp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31881' May 6 20:25:19.245: INFO: stderr: "I0506 20:25:19.187371 2050 log.go:172] (0xc000ad0210) (0xc0004fa8c0) Create stream\nI0506 20:25:19.187416 2050 log.go:172] (0xc000ad0210) (0xc0004fa8c0) Stream added, broadcasting: 1\nI0506 20:25:19.188964 2050 log.go:172] (0xc000ad0210) Reply frame received for 1\nI0506 20:25:19.189017 2050 log.go:172] (0xc000ad0210) (0xc0004b8640) Create stream\nI0506 20:25:19.189039 2050 log.go:172] (0xc000ad0210) (0xc0004b8640) Stream added, broadcasting: 3\nI0506 20:25:19.190461 2050 log.go:172] (0xc000ad0210) Reply frame received for 3\nI0506 20:25:19.190490 2050 log.go:172] (0xc000ad0210) (0xc00014f5e0) Create stream\nI0506 20:25:19.190498 2050 log.go:172] (0xc000ad0210) (0xc00014f5e0) Stream added, broadcasting: 5\nI0506 20:25:19.191375 2050 log.go:172] (0xc000ad0210) Reply frame received for 5\nI0506 20:25:19.239609 2050 log.go:172] (0xc000ad0210) Data frame received for 5\nI0506 20:25:19.239636 2050 log.go:172] (0xc00014f5e0) (5) Data frame handling\nI0506 20:25:19.239665 2050 log.go:172] (0xc00014f5e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31881\nConnection to 172.17.0.13 31881 port [tcp/31881] succeeded!\nI0506 20:25:19.239801 2050 log.go:172] (0xc000ad0210) Data frame received for 3\nI0506 20:25:19.239840 2050 log.go:172] (0xc0004b8640) (3) Data frame handling\nI0506 20:25:19.239861 2050 log.go:172] (0xc000ad0210) Data frame received for 5\nI0506 20:25:19.239880 2050 log.go:172] (0xc00014f5e0) (5) Data frame handling\nI0506 20:25:19.241256 2050 log.go:172] (0xc000ad0210) Data frame received for 1\nI0506 20:25:19.241304 2050 
log.go:172] (0xc0004fa8c0) (1) Data frame handling\nI0506 20:25:19.241330 2050 log.go:172] (0xc0004fa8c0) (1) Data frame sent\nI0506 20:25:19.241349 2050 log.go:172] (0xc000ad0210) (0xc0004fa8c0) Stream removed, broadcasting: 1\nI0506 20:25:19.241378 2050 log.go:172] (0xc000ad0210) Go away received\nI0506 20:25:19.241607 2050 log.go:172] (0xc000ad0210) (0xc0004fa8c0) Stream removed, broadcasting: 1\nI0506 20:25:19.241621 2050 log.go:172] (0xc000ad0210) (0xc0004b8640) Stream removed, broadcasting: 3\nI0506 20:25:19.241632 2050 log.go:172] (0xc000ad0210) (0xc00014f5e0) Stream removed, broadcasting: 5\n" May 6 20:25:19.245: INFO: stdout: "" May 6 20:25:19.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8422 execpod-affinitysb5mp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31881' May 6 20:25:19.433: INFO: stderr: "I0506 20:25:19.368038 2070 log.go:172] (0xc0009b1600) (0xc0006dd4a0) Create stream\nI0506 20:25:19.368089 2070 log.go:172] (0xc0009b1600) (0xc0006dd4a0) Stream added, broadcasting: 1\nI0506 20:25:19.370265 2070 log.go:172] (0xc0009b1600) Reply frame received for 1\nI0506 20:25:19.370306 2070 log.go:172] (0xc0009b1600) (0xc0006eae60) Create stream\nI0506 20:25:19.370330 2070 log.go:172] (0xc0009b1600) (0xc0006eae60) Stream added, broadcasting: 3\nI0506 20:25:19.371069 2070 log.go:172] (0xc0009b1600) Reply frame received for 3\nI0506 20:25:19.371089 2070 log.go:172] (0xc0009b1600) (0xc0006ddea0) Create stream\nI0506 20:25:19.371098 2070 log.go:172] (0xc0009b1600) (0xc0006ddea0) Stream added, broadcasting: 5\nI0506 20:25:19.371762 2070 log.go:172] (0xc0009b1600) Reply frame received for 5\nI0506 20:25:19.428095 2070 log.go:172] (0xc0009b1600) Data frame received for 3\nI0506 20:25:19.428124 2070 log.go:172] (0xc0006eae60) (3) Data frame handling\nI0506 20:25:19.428152 2070 log.go:172] (0xc0009b1600) Data frame received for 5\nI0506 20:25:19.428185 2070 log.go:172] (0xc0006ddea0) (5) Data frame handling\nI0506 20:25:19.428210 2070 log.go:172] (0xc0006ddea0) (5) Data frame sent\nI0506 20:25:19.428224 2070 log.go:172] (0xc0009b1600) Data frame received for 5\nI0506 20:25:19.428237 2070 log.go:172] (0xc0006ddea0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31881\nConnection to 172.17.0.12 31881 port [tcp/31881] succeeded!\nI0506 20:25:19.429336 2070 log.go:172] (0xc0009b1600) Data frame received for 1\nI0506 20:25:19.429414 2070 log.go:172] (0xc0006dd4a0) (1) Data frame handling\nI0506 20:25:19.429455 2070 log.go:172] (0xc0006dd4a0) (1) Data frame sent\nI0506 20:25:19.429476 2070 log.go:172] (0xc0009b1600) (0xc0006dd4a0) Stream removed, broadcasting: 1\nI0506 20:25:19.429567 2070 log.go:172] (0xc0009b1600) Go away received\nI0506 20:25:19.429813 2070 log.go:172] (0xc0009b1600) (0xc0006dd4a0) Stream removed, broadcasting: 1\nI0506 20:25:19.429829 2070 log.go:172] (0xc0009b1600) (0xc0006eae60) Stream removed, broadcasting: 3\nI0506 20:25:19.429837 2070 log.go:172] (0xc0009b1600) (0xc0006ddea0) Stream removed, broadcasting: 5\n" May 6 20:25:19.433: INFO: stdout: "" May 6 20:25:19.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8422 execpod-affinitysb5mp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31881/ ; done' May 6 20:25:19.880: INFO: stderr: "I0506 20:25:19.697255 2087 log.go:172] (0xc000a871e0) (0xc000be63c0) Create stream\nI0506 20:25:19.697321 2087 
log.go:172] (0xc000a871e0) (0xc000be63c0) Stream added, broadcasting: 1\nI0506 20:25:19.702969 2087 log.go:172] (0xc000a871e0) Reply frame received for 1\nI0506 20:25:19.703011 2087 log.go:172] (0xc000a871e0) (0xc00056a500) Create stream\nI0506 20:25:19.703022 2087 log.go:172] (0xc000a871e0) (0xc00056a500) Stream added, broadcasting: 3\nI0506 20:25:19.704017 2087 log.go:172] (0xc000a871e0) Reply frame received for 3\nI0506 20:25:19.704086 2087 log.go:172] (0xc000a871e0) (0xc00055e1e0) Create stream\nI0506 20:25:19.704118 2087 log.go:172] (0xc000a871e0) (0xc00055e1e0) Stream added, broadcasting: 5\nI0506 20:25:19.704982 2087 log.go:172] (0xc000a871e0) Reply frame received for 5\nI0506 20:25:19.773007 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.773047 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.773064 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.773090 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.773105 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.773316 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:19.778372 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.778398 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.778409 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.778790 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.778818 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.778832 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.778846 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.778853 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.778863 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:19.784974 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.785000 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.785020 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.785807 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.785838 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.785852 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.785876 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.785894 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.785909 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:19.793740 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.793764 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.793775 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.793878 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.793899 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.793916 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.794044 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.794059 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.794075 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:31881/\nI0506 20:25:19.801375 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.801404 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.801428 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.801770 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.801792 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.801821 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\nI0506 20:25:19.801845 2087 log.go:172] (0xc000a871e0) Data frame received for 5\n+ I0506 20:25:19.801855 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.801892 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\nI0506 20:25:19.801919 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.801934 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\necho\nI0506 20:25:19.801968 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\nI0506 20:25:19.801987 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.802000 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.802014 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.802394 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.802486 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.802503 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\nI0506 20:25:19.802512 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.802519 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:19.802537 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\nI0506 20:25:19.810406 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.810437 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.810467 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.811330 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.811350 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.811361 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.811374 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.811381 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.811396 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:19.815748 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.815776 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.815802 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.816179 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.816203 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.816214 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\nI0506 20:25:19.816222 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.816227 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0506 20:25:19.816259 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\nI0506 20:25:19.816301 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.816356 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.816374 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\nI0506 20:25:19.816386 2087 log.go:172] (0xc000a871e0) Data 
frame received for 3\nI0506 20:25:19.816393 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.816402 2087 log.go:172] (0xc00056a500) (3) Data frame sent\n 2 http://172.17.0.13:31881/\nI0506 20:25:19.821559 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.821575 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.821614 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.822074 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.822089 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.822098 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.822104 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.822109 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.822115 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\nI0506 20:25:19.822120 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.822125 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:19.822141 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\nI0506 20:25:19.826916 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.826935 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.826953 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.827555 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.827573 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.827593 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:19.827608 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.827618 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.827627 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.831467 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.831481 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.831491 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.831866 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.831891 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.831928 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.831944 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:19.831960 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.831993 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.835131 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.835143 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.835152 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.835621 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.835641 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.835650 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.835662 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.835668 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.835676 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:19.839443 
2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.839464 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.839484 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.839866 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.839901 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.839928 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:19.839966 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.839983 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.840013 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.845773 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.845787 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.845800 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.846268 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.846293 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.846329 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.846353 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.846391 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.846429 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:19.852415 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.852439 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.852452 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.853039 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.853079 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.853102 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.853313 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.853332 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.853353 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:19.856623 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.856638 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.856652 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.857240 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.857280 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.857295 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.857314 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.857325 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/I0506 20:25:19.857358 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.857408 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.857427 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.857454 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\n\nI0506 20:25:19.863672 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.863697 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.863713 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 
20:25:19.864281 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.864314 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.864361 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.864381 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.864400 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.864411 2087 log.go:172] (0xc00055e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:19.868473 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.868503 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.868525 2087 log.go:172] (0xc00056a500) (3) Data frame sent\nI0506 20:25:19.869102 2087 log.go:172] (0xc000a871e0) Data frame received for 3\nI0506 20:25:19.869324 2087 log.go:172] (0xc00056a500) (3) Data frame handling\nI0506 20:25:19.869351 2087 log.go:172] (0xc000a871e0) Data frame received for 5\nI0506 20:25:19.869360 2087 log.go:172] (0xc00055e1e0) (5) Data frame handling\nI0506 20:25:19.875919 2087 log.go:172] (0xc000a871e0) Data frame received for 1\nI0506 20:25:19.875938 2087 log.go:172] (0xc000be63c0) (1) Data frame handling\nI0506 20:25:19.875950 2087 log.go:172] (0xc000be63c0) (1) Data frame sent\nI0506 20:25:19.875960 2087 log.go:172] (0xc000a871e0) (0xc000be63c0) Stream removed, broadcasting: 1\nI0506 20:25:19.876285 2087 log.go:172] (0xc000a871e0) (0xc000be63c0) Stream removed, broadcasting: 1\nI0506 20:25:19.876304 2087 log.go:172] (0xc000a871e0) (0xc00056a500) Stream removed, broadcasting: 3\nI0506 20:25:19.876366 2087 log.go:172] (0xc000a871e0) Go away received\nI0506 20:25:19.876451 2087 log.go:172] (0xc000a871e0) (0xc00055e1e0) Stream removed, broadcasting: 5\n" May 6 20:25:19.880: INFO: stdout: "\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht\naffinity-nodeport-timeout-kqxht" May 6 20:25:19.880: INFO: Received response from host: May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht 
May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Received response from host: affinity-nodeport-timeout-kqxht May 6 20:25:19.880: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8422 execpod-affinitysb5mp -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31881/' May 6 20:25:20.059: INFO: stderr: "I0506 20:25:19.997080 2109 log.go:172] (0xc00003b290) (0xc000828fa0) Create stream\nI0506 20:25:19.997316 2109 log.go:172] (0xc00003b290) (0xc000828fa0) Stream added, broadcasting: 1\nI0506 20:25:20.001668 2109 log.go:172] (0xc00003b290) Reply frame received for 1\nI0506 20:25:20.001703 2109 log.go:172] (0xc00003b290) (0xc00081bc20) Create stream\nI0506 20:25:20.001715 2109 log.go:172] (0xc00003b290) (0xc00081bc20) Stream added, broadcasting: 3\nI0506 20:25:20.002297 2109 log.go:172] (0xc00003b290) Reply frame received for 3\nI0506 20:25:20.002323 2109 log.go:172] (0xc00003b290) (0xc0007105a0) Create stream\nI0506 20:25:20.002330 2109 log.go:172] (0xc00003b290) (0xc0007105a0) Stream added, broadcasting: 5\nI0506 20:25:20.002952 2109 log.go:172] (0xc00003b290) Reply frame received for 5\nI0506 20:25:20.049604 2109 log.go:172] (0xc00003b290) Data frame received for 5\nI0506 20:25:20.049624 2109 log.go:172] (0xc0007105a0) (5) Data frame handling\nI0506 20:25:20.049636 2109 log.go:172] (0xc0007105a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:20.052935 2109 log.go:172] (0xc00003b290) Data frame received for 3\nI0506 20:25:20.052959 2109 log.go:172] (0xc00081bc20) (3) Data frame handling\nI0506 20:25:20.052977 2109 log.go:172] (0xc00081bc20) (3) Data frame sent\nI0506 20:25:20.053861 2109 log.go:172] (0xc00003b290) Data frame received for 3\nI0506 20:25:20.053880 2109 log.go:172] (0xc00081bc20) (3) Data frame handling\nI0506 20:25:20.053912 2109 log.go:172] (0xc00003b290) Data frame received for 5\nI0506 20:25:20.053939 2109 log.go:172] (0xc0007105a0) (5) Data frame handling\nI0506 20:25:20.055282 2109 log.go:172] (0xc00003b290) Data frame received for 1\nI0506 20:25:20.055300 2109 log.go:172] (0xc000828fa0) (1) Data frame handling\nI0506 20:25:20.055319 2109 log.go:172] (0xc000828fa0) (1) Data frame sent\nI0506 20:25:20.055494 2109 log.go:172] (0xc00003b290) (0xc000828fa0) Stream removed, broadcasting: 1\nI0506 20:25:20.055676 2109 log.go:172] (0xc00003b290) Go away received\nI0506 20:25:20.055748 2109 log.go:172] (0xc00003b290) (0xc000828fa0) Stream removed, broadcasting: 1\nI0506 20:25:20.055761 2109 log.go:172] (0xc00003b290) (0xc00081bc20) Stream removed, broadcasting: 3\nI0506 20:25:20.055766 2109 log.go:172] (0xc00003b290) (0xc0007105a0) Stream removed, broadcasting: 5\n" May 6 20:25:20.059: INFO: stdout: "affinity-nodeport-timeout-kqxht" May 6 20:25:35.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8422 execpod-affinitysb5mp -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31881/' May 6 20:25:35.296: INFO: stderr: "I0506 20:25:35.211681 2129 log.go:172] (0xc00003a0b0) (0xc000708000) Create stream\nI0506 20:25:35.211741 2129 log.go:172] (0xc00003a0b0) (0xc000708000) Stream added, broadcasting: 1\nI0506 20:25:35.213725 2129 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0506 
20:25:35.213775 2129 log.go:172] (0xc00003a0b0) (0xc000708500) Create stream\nI0506 20:25:35.213790 2129 log.go:172] (0xc00003a0b0) (0xc000708500) Stream added, broadcasting: 3\nI0506 20:25:35.215204 2129 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0506 20:25:35.215240 2129 log.go:172] (0xc00003a0b0) (0xc000708a00) Create stream\nI0506 20:25:35.215250 2129 log.go:172] (0xc00003a0b0) (0xc000708a00) Stream added, broadcasting: 5\nI0506 20:25:35.217721 2129 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0506 20:25:35.284734 2129 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0506 20:25:35.284775 2129 log.go:172] (0xc000708a00) (5) Data frame handling\nI0506 20:25:35.284796 2129 log.go:172] (0xc000708a00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31881/\nI0506 20:25:35.288150 2129 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0506 20:25:35.288173 2129 log.go:172] (0xc000708500) (3) Data frame handling\nI0506 20:25:35.288195 2129 log.go:172] (0xc000708500) (3) Data frame sent\nI0506 20:25:35.289050 2129 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0506 20:25:35.289073 2129 log.go:172] (0xc000708a00) (5) Data frame handling\nI0506 20:25:35.289108 2129 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0506 20:25:35.289280 2129 log.go:172] (0xc000708500) (3) Data frame handling\nI0506 20:25:35.290719 2129 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0506 20:25:35.290734 2129 log.go:172] (0xc000708000) (1) Data frame handling\nI0506 20:25:35.290743 2129 log.go:172] (0xc000708000) (1) Data frame sent\nI0506 20:25:35.290760 2129 log.go:172] (0xc00003a0b0) (0xc000708000) Stream removed, broadcasting: 1\nI0506 20:25:35.290780 2129 log.go:172] (0xc00003a0b0) Go away received\nI0506 20:25:35.291169 2129 log.go:172] (0xc00003a0b0) (0xc000708000) Stream removed, broadcasting: 1\nI0506 20:25:35.291186 2129 log.go:172] (0xc00003a0b0) (0xc000708500) Stream removed, broadcasting: 3\nI0506 20:25:35.291194 2129 log.go:172] (0xc00003a0b0) (0xc000708a00) Stream removed, broadcasting: 5\n" May 6 20:25:35.296: INFO: stdout: "affinity-nodeport-timeout-jk6nf" May 6 20:25:35.296: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-8422, will wait for the garbage collector to delete the pods May 6 20:25:35.440: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 10.889736ms May 6 20:25:36.241: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 800.181898ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:25:46.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8422" for this suite. 
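Editor's note: the exchange above is the point of this test. Sixteen back-to-back curls through the NodePort (172.17.0.13:31881) all returned the same backend, affinity-nodeport-timeout-kqxht; the runner then stayed idle for 15 seconds, repeated the request, and got affinity-nodeport-timeout-jk6nf, showing that the ClientIP affinity entry had expired. A minimal sketch of the kind of Service this exercises follows; the manifest is not taken from the log, and the selector, target port, and the 10-second timeout are illustrative assumptions (the real timeout is not shown in this excerpt, beyond being at most the observed 15-second idle gap):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-nodeport-timeout      # hypothetical; matches the RC name used by the test
    spec:
      type: NodePort
      selector:
        name: affinity-nodeport-timeout    # hypothetical label
      ports:
      - port: 80                           # the log probes port 80 on the ClusterIP (10.99.203.30)
        targetPort: 8080                   # hypothetical container port
      sessionAffinity: ClientIP            # pin each client IP to a single backend pod
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 10               # illustrative; the affinity entry expires after this idle time
    EOF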
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:55.665 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":113,"skipped":2028,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:25:46.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-48bb15b7-2a2a-445b-a368-381693bf6a83 STEP: Creating a pod to test consume configMaps May 6 20:25:46.474: INFO: Waiting up to 5m0s for pod "pod-configmaps-624cc5ff-0336-44c8-afc8-3afabacc1bf2" in namespace "configmap-1218" to be "Succeeded or Failed" May 6 20:25:46.491: INFO: Pod "pod-configmaps-624cc5ff-0336-44c8-afc8-3afabacc1bf2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.703181ms May 6 20:25:48.496: INFO: Pod "pod-configmaps-624cc5ff-0336-44c8-afc8-3afabacc1bf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021927898s May 6 20:25:50.500: INFO: Pod "pod-configmaps-624cc5ff-0336-44c8-afc8-3afabacc1bf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026488892s May 6 20:25:52.504: INFO: Pod "pod-configmaps-624cc5ff-0336-44c8-afc8-3afabacc1bf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030719983s STEP: Saw pod success May 6 20:25:52.505: INFO: Pod "pod-configmaps-624cc5ff-0336-44c8-afc8-3afabacc1bf2" satisfied condition "Succeeded or Failed" May 6 20:25:52.507: INFO: Trying to get logs from node latest-worker pod pod-configmaps-624cc5ff-0336-44c8-afc8-3afabacc1bf2 container configmap-volume-test: STEP: delete the pod May 6 20:25:52.547: INFO: Waiting for pod pod-configmaps-624cc5ff-0336-44c8-afc8-3afabacc1bf2 to disappear May 6 20:25:52.560: INFO: Pod pod-configmaps-624cc5ff-0336-44c8-afc8-3afabacc1bf2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:25:52.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1218" for this suite. 
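Editor's note: the ConfigMap test above creates configmap-test-volume-map-48bb15b7-2a2a-445b-a368-381693bf6a83, mounts it into a pod with a key-to-path mapping and an explicit per-item file mode, and waits for the pod to reach Succeeded. A minimal sketch of that mounting pattern, with assumed names, key, path, image, and mode (none of these values appear in the log excerpt):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-example          # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox                      # hypothetical; the test uses its own test image
        command: ["sh", "-c", "ls -lR /etc/configmap-volume && cat /etc/configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: my-configmap                # hypothetical ConfigMap name
          items:
          - key: data-1                     # remap this key...
            path: path/to/data-2            # ...to a different file path inside the volume
            mode: 0400                      # per-item file mode; this is the [LinuxOnly] part
    EOF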
• [SLOW TEST:6.255 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":114,"skipped":2044,"failed":0} SSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:25:52.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:25:52.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-6994" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":115,"skipped":2048,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:25:52.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:25:52.780: INFO: Creating deployment "test-recreate-deployment" May 6 20:25:52.794: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 6 20:25:52.803: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 6 20:25:54.951: INFO: Waiting deployment "test-recreate-deployment" to complete May 6 20:25:54.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393552, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393552, loc:(*time.Location)(0x7c2f200)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393552, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393552, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 20:25:57.032: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393552, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393552, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393552, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724393552, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 20:25:58.958: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 6 20:25:59.038: INFO: Updating deployment test-recreate-deployment May 6 20:25:59.038: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 6 20:26:00.358: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3963 /apis/apps/v1/namespaces/deployment-3963/deployments/test-recreate-deployment 1acb6408-7a6a-49ca-9f36-6264d0895ae4 2090644 2 2020-05-06 20:25:52 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-06 20:25:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-06 20:26:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033da8e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-06 20:26:00 +0000 UTC,LastTransitionTime:2020-05-06 20:26:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-06 20:26:00 +0000 UTC,LastTransitionTime:2020-05-06 20:25:52 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 6 20:26:00.372: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-3963 /apis/apps/v1/namespaces/deployment-3963/replicasets/test-recreate-deployment-d5667d9c7 feee59f8-d3ea-48e6-87a5-5342cb14cd9f 2090641 1 2020-05-06 20:25:59 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 1acb6408-7a6a-49ca-9f36-6264d0895ae4 0xc0033dade0 0xc0033dade1}] [] [{kube-controller-manager Update apps/v1 2020-05-06 20:26:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1acb6408-7a6a-49ca-9f36-6264d0895ae4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033dae58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 20:26:00.372: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 6 20:26:00.372: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-3963 /apis/apps/v1/namespaces/deployment-3963/replicasets/test-recreate-deployment-6d65b9f6d8 b923f5fb-582b-4d4d-8265-11fb9a98df9d 2090631 2 2020-05-06 20:25:52 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 1acb6408-7a6a-49ca-9f36-6264d0895ae4 0xc0033dace7 0xc0033dace8}] [] [{kube-controller-manager Update apps/v1 2020-05-06 20:25:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1acb6408-7a6a-49ca-9f36-6264d0895ae4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033dad78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 20:26:00.377: INFO: Pod "test-recreate-deployment-d5667d9c7-j2h7p" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-j2h7p test-recreate-deployment-d5667d9c7- deployment-3963 /api/v1/namespaces/deployment-3963/pods/test-recreate-deployment-d5667d9c7-j2h7p c7a966ac-d47c-4c5c-89a0-1be4f6af82ae 2090645 0 2020-05-06 20:25:59 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 feee59f8-d3ea-48e6-87a5-5342cb14cd9f 0xc0033a7950 0xc0033a7951}] [] [{kube-controller-manager Update v1 2020-05-06 20:25:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"feee59f8-d3ea-48e6-87a5-5342cb14cd9f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 20:26:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sqgrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sqgrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sqgrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 20:26:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 20:26:00 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 20:26:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 20:25:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-06 20:26:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:26:00.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3963" for this suite. • [SLOW TEST:7.693 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":116,"skipped":2064,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:26:00.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-6373 STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 20:26:00.441: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 6 20:26:00.514: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 20:26:02.517: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 20:26:04.517: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 20:26:07.071: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 20:26:09.027: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:26:10.544: INFO: The status of Pod netserver-0 is Running (Ready = 
false) May 6 20:26:12.518: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:26:14.518: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:26:16.518: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:26:18.553: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:26:20.787: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:26:22.518: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:26:24.517: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:26:26.518: INFO: The status of Pod netserver-0 is Running (Ready = true) May 6 20:26:26.682: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 6 20:26:37.598: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.117:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6373 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:26:37.598: INFO: >>> kubeConfig: /root/.kube/config I0506 20:26:37.624239 7 log.go:172] (0xc00242b550) (0xc000c7bae0) Create stream I0506 20:26:37.624272 7 log.go:172] (0xc00242b550) (0xc000c7bae0) Stream added, broadcasting: 1 I0506 20:26:37.626321 7 log.go:172] (0xc00242b550) Reply frame received for 1 I0506 20:26:37.626375 7 log.go:172] (0xc00242b550) (0xc000c7bea0) Create stream I0506 20:26:37.626394 7 log.go:172] (0xc00242b550) (0xc000c7bea0) Stream added, broadcasting: 3 I0506 20:26:37.627092 7 log.go:172] (0xc00242b550) Reply frame received for 3 I0506 20:26:37.627127 7 log.go:172] (0xc00242b550) (0xc00116c0a0) Create stream I0506 20:26:37.627139 7 log.go:172] (0xc00242b550) (0xc00116c0a0) Stream added, broadcasting: 5 I0506 20:26:37.627742 7 log.go:172] (0xc00242b550) Reply frame received for 5 I0506 20:26:37.689932 7 log.go:172] (0xc00242b550) Data frame received for 3 I0506 20:26:37.689992 7 log.go:172] (0xc000c7bea0) (3) Data frame handling I0506 20:26:37.690019 7 log.go:172] (0xc000c7bea0) (3) Data frame sent I0506 20:26:37.690039 7 log.go:172] (0xc00242b550) Data frame received for 3 I0506 20:26:37.690199 7 log.go:172] (0xc000c7bea0) (3) Data frame handling I0506 20:26:37.690246 7 log.go:172] (0xc00242b550) Data frame received for 5 I0506 20:26:37.690266 7 log.go:172] (0xc00116c0a0) (5) Data frame handling I0506 20:26:37.691956 7 log.go:172] (0xc00242b550) Data frame received for 1 I0506 20:26:37.691975 7 log.go:172] (0xc000c7bae0) (1) Data frame handling I0506 20:26:37.692000 7 log.go:172] (0xc000c7bae0) (1) Data frame sent I0506 20:26:37.692023 7 log.go:172] (0xc00242b550) (0xc000c7bae0) Stream removed, broadcasting: 1 I0506 20:26:37.692064 7 log.go:172] (0xc00242b550) Go away received I0506 20:26:37.692195 7 log.go:172] (0xc00242b550) (0xc000c7bae0) Stream removed, broadcasting: 1 I0506 20:26:37.692227 7 log.go:172] (0xc00242b550) (0xc000c7bea0) Stream removed, broadcasting: 3 I0506 20:26:37.692239 7 log.go:172] (0xc00242b550) (0xc00116c0a0) Stream removed, broadcasting: 5 May 6 20:26:37.692: INFO: Found all expected endpoints: [netserver-0] May 6 20:26:37.695: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.200:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6373 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:26:37.695: INFO: >>> 
kubeConfig: /root/.kube/config I0506 20:26:37.720733 7 log.go:172] (0xc002d408f0) (0xc001497720) Create stream I0506 20:26:37.720766 7 log.go:172] (0xc002d408f0) (0xc001497720) Stream added, broadcasting: 1 I0506 20:26:37.722601 7 log.go:172] (0xc002d408f0) Reply frame received for 1 I0506 20:26:37.722634 7 log.go:172] (0xc002d408f0) (0xc000c7bf40) Create stream I0506 20:26:37.722653 7 log.go:172] (0xc002d408f0) (0xc000c7bf40) Stream added, broadcasting: 3 I0506 20:26:37.723468 7 log.go:172] (0xc002d408f0) Reply frame received for 3 I0506 20:26:37.723496 7 log.go:172] (0xc002d408f0) (0xc001783720) Create stream I0506 20:26:37.723507 7 log.go:172] (0xc002d408f0) (0xc001783720) Stream added, broadcasting: 5 I0506 20:26:37.724148 7 log.go:172] (0xc002d408f0) Reply frame received for 5 I0506 20:26:37.783574 7 log.go:172] (0xc002d408f0) Data frame received for 5 I0506 20:26:37.783615 7 log.go:172] (0xc001783720) (5) Data frame handling I0506 20:26:37.783643 7 log.go:172] (0xc002d408f0) Data frame received for 3 I0506 20:26:37.783686 7 log.go:172] (0xc000c7bf40) (3) Data frame handling I0506 20:26:37.783724 7 log.go:172] (0xc000c7bf40) (3) Data frame sent I0506 20:26:37.783776 7 log.go:172] (0xc002d408f0) Data frame received for 3 I0506 20:26:37.783820 7 log.go:172] (0xc000c7bf40) (3) Data frame handling I0506 20:26:37.785344 7 log.go:172] (0xc002d408f0) Data frame received for 1 I0506 20:26:37.785394 7 log.go:172] (0xc001497720) (1) Data frame handling I0506 20:26:37.785456 7 log.go:172] (0xc001497720) (1) Data frame sent I0506 20:26:37.785486 7 log.go:172] (0xc002d408f0) (0xc001497720) Stream removed, broadcasting: 1 I0506 20:26:37.785514 7 log.go:172] (0xc002d408f0) Go away received I0506 20:26:37.785660 7 log.go:172] (0xc002d408f0) (0xc001497720) Stream removed, broadcasting: 1 I0506 20:26:37.785772 7 log.go:172] (0xc002d408f0) (0xc000c7bf40) Stream removed, broadcasting: 3 I0506 20:26:37.785828 7 log.go:172] (0xc002d408f0) (0xc001783720) Stream removed, broadcasting: 5 May 6 20:26:37.785: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:26:37.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6373" for this suite. 
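Editor's note: the two ExecWithOptions calls above are the actual connectivity check: from the host-network test pod the framework curls each netserver pod's /hostName endpoint (10.244.1.117 and 10.244.2.200 on port 8080) and matches the answers against the expected endpoint list. The same probe can be re-run by hand; the namespace, pod, container, and IP below are copied from the log, the rest is the generic pattern:

    # Look up the target pod IPs first (the 10.244.x.x addresses above come from here).
    kubectl get pods -n pod-network-test-6373 -o wide

    # Re-run the framework's probe: fetch the hostname served by netserver-0.
    kubectl exec -n pod-network-test-6373 host-test-container-pod -c agnhost -- \
      /bin/sh -c 'curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.117:8080/hostName | grep -v "^\s*$"'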
• [SLOW TEST:38.261 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":117,"skipped":2071,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:26:38.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 20:26:41.374: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4583cf89-cd5d-47d3-81eb-196b1add0e7e" in namespace "projected-2745" to be "Succeeded or Failed" May 6 20:26:41.842: INFO: Pod "downwardapi-volume-4583cf89-cd5d-47d3-81eb-196b1add0e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 467.70085ms May 6 20:26:43.907: INFO: Pod "downwardapi-volume-4583cf89-cd5d-47d3-81eb-196b1add0e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.532675478s May 6 20:26:46.318: INFO: Pod "downwardapi-volume-4583cf89-cd5d-47d3-81eb-196b1add0e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.943872861s May 6 20:26:48.733: INFO: Pod "downwardapi-volume-4583cf89-cd5d-47d3-81eb-196b1add0e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.359119194s May 6 20:26:50.918: INFO: Pod "downwardapi-volume-4583cf89-cd5d-47d3-81eb-196b1add0e7e": Phase="Running", Reason="", readiness=true. Elapsed: 9.543650955s May 6 20:26:52.922: INFO: Pod "downwardapi-volume-4583cf89-cd5d-47d3-81eb-196b1add0e7e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.548018087s STEP: Saw pod success May 6 20:26:52.922: INFO: Pod "downwardapi-volume-4583cf89-cd5d-47d3-81eb-196b1add0e7e" satisfied condition "Succeeded or Failed" May 6 20:26:52.925: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4583cf89-cd5d-47d3-81eb-196b1add0e7e container client-container: STEP: delete the pod May 6 20:26:52.993: INFO: Waiting for pod downwardapi-volume-4583cf89-cd5d-47d3-81eb-196b1add0e7e to disappear May 6 20:26:53.007: INFO: Pod downwardapi-volume-4583cf89-cd5d-47d3-81eb-196b1add0e7e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:26:53.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2745" for this suite. • [SLOW TEST:14.376 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":118,"skipped":2075,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:26:53.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:26:53.138: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:26:54.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8148" for this suite. 
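The creating/deleting test above round-trips a CustomResourceDefinition against the API server. A minimal sketch of the same lifecycle with kubectl; the group and kind below are illustrative, since the test generates random names:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF
# Wait for the API server to accept the new type, then clean up.
kubectl wait --for=condition=established --timeout=60s crd/foos.example.com
kubectl delete crd foos.example.com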
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":119,"skipped":2077,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:26:54.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8188 STEP: creating service affinity-nodeport in namespace services-8188 STEP: creating replication controller affinity-nodeport in namespace services-8188 I0506 20:26:54.305625 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-8188, replica count: 3 I0506 20:26:57.356074 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 20:27:00.356241 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 20:27:03.356476 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 20:27:03.365: INFO: Creating new exec pod May 6 20:27:12.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8188 execpod-affinityxqmj6 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 6 20:27:12.639: INFO: stderr: "I0506 20:27:12.562557 2150 log.go:172] (0xc0006b08f0) (0xc0003b54a0) Create stream\nI0506 20:27:12.562635 2150 log.go:172] (0xc0006b08f0) (0xc0003b54a0) Stream added, broadcasting: 1\nI0506 20:27:12.566455 2150 log.go:172] (0xc0006b08f0) Reply frame received for 1\nI0506 20:27:12.566513 2150 log.go:172] (0xc0006b08f0) (0xc00053c1e0) Create stream\nI0506 20:27:12.566537 2150 log.go:172] (0xc0006b08f0) (0xc00053c1e0) Stream added, broadcasting: 3\nI0506 20:27:12.567572 2150 log.go:172] (0xc0006b08f0) Reply frame received for 3\nI0506 20:27:12.567618 2150 log.go:172] (0xc0006b08f0) (0xc0003b5ae0) Create stream\nI0506 20:27:12.567648 2150 log.go:172] (0xc0006b08f0) (0xc0003b5ae0) Stream added, broadcasting: 5\nI0506 20:27:12.568605 2150 log.go:172] (0xc0006b08f0) Reply frame received for 5\nI0506 20:27:12.632908 2150 log.go:172] (0xc0006b08f0) Data frame received for 5\nI0506 20:27:12.632935 2150 log.go:172] (0xc0003b5ae0) (5) Data frame handling\nI0506 20:27:12.632952 2150 log.go:172] (0xc0003b5ae0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0506 20:27:12.633693 2150 log.go:172] (0xc0006b08f0) Data frame received for 5\nI0506 20:27:12.633730 2150 
log.go:172] (0xc0003b5ae0) (5) Data frame handling\nI0506 20:27:12.633756 2150 log.go:172] (0xc0003b5ae0) (5) Data frame sent\nI0506 20:27:12.633772 2150 log.go:172] (0xc0006b08f0) Data frame received for 5\nI0506 20:27:12.633783 2150 log.go:172] (0xc0003b5ae0) (5) Data frame handling\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0506 20:27:12.633945 2150 log.go:172] (0xc0006b08f0) Data frame received for 3\nI0506 20:27:12.633971 2150 log.go:172] (0xc00053c1e0) (3) Data frame handling\nI0506 20:27:12.635258 2150 log.go:172] (0xc0006b08f0) Data frame received for 1\nI0506 20:27:12.635274 2150 log.go:172] (0xc0003b54a0) (1) Data frame handling\nI0506 20:27:12.635289 2150 log.go:172] (0xc0003b54a0) (1) Data frame sent\nI0506 20:27:12.635383 2150 log.go:172] (0xc0006b08f0) (0xc0003b54a0) Stream removed, broadcasting: 1\nI0506 20:27:12.635405 2150 log.go:172] (0xc0006b08f0) Go away received\nI0506 20:27:12.635833 2150 log.go:172] (0xc0006b08f0) (0xc0003b54a0) Stream removed, broadcasting: 1\nI0506 20:27:12.635856 2150 log.go:172] (0xc0006b08f0) (0xc00053c1e0) Stream removed, broadcasting: 3\nI0506 20:27:12.635866 2150 log.go:172] (0xc0006b08f0) (0xc0003b5ae0) Stream removed, broadcasting: 5\n" May 6 20:27:12.639: INFO: stdout: "" May 6 20:27:12.640: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8188 execpod-affinityxqmj6 -- /bin/sh -x -c nc -zv -t -w 2 10.97.126.69 80' May 6 20:27:12.822: INFO: stderr: "I0506 20:27:12.759897 2171 log.go:172] (0xc0002f2000) (0xc0009a55e0) Create stream\nI0506 20:27:12.759940 2171 log.go:172] (0xc0002f2000) (0xc0009a55e0) Stream added, broadcasting: 1\nI0506 20:27:12.762208 2171 log.go:172] (0xc0002f2000) Reply frame received for 1\nI0506 20:27:12.762245 2171 log.go:172] (0xc0002f2000) (0xc000996e60) Create stream\nI0506 20:27:12.762263 2171 log.go:172] (0xc0002f2000) (0xc000996e60) Stream added, broadcasting: 3\nI0506 20:27:12.765054 2171 log.go:172] (0xc0002f2000) Reply frame received for 3\nI0506 20:27:12.765087 2171 log.go:172] (0xc0002f2000) (0xc00098a5a0) Create stream\nI0506 20:27:12.765101 2171 log.go:172] (0xc0002f2000) (0xc00098a5a0) Stream added, broadcasting: 5\nI0506 20:27:12.766287 2171 log.go:172] (0xc0002f2000) Reply frame received for 5\nI0506 20:27:12.815775 2171 log.go:172] (0xc0002f2000) Data frame received for 3\nI0506 20:27:12.815799 2171 log.go:172] (0xc000996e60) (3) Data frame handling\nI0506 20:27:12.815917 2171 log.go:172] (0xc0002f2000) Data frame received for 5\nI0506 20:27:12.815930 2171 log.go:172] (0xc00098a5a0) (5) Data frame handling\nI0506 20:27:12.815946 2171 log.go:172] (0xc00098a5a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.97.126.69 80\nConnection to 10.97.126.69 80 port [tcp/http] succeeded!\nI0506 20:27:12.815990 2171 log.go:172] (0xc0002f2000) Data frame received for 5\nI0506 20:27:12.816014 2171 log.go:172] (0xc00098a5a0) (5) Data frame handling\nI0506 20:27:12.817295 2171 log.go:172] (0xc0002f2000) Data frame received for 1\nI0506 20:27:12.817309 2171 log.go:172] (0xc0009a55e0) (1) Data frame handling\nI0506 20:27:12.817317 2171 log.go:172] (0xc0009a55e0) (1) Data frame sent\nI0506 20:27:12.817327 2171 log.go:172] (0xc0002f2000) (0xc0009a55e0) Stream removed, broadcasting: 1\nI0506 20:27:12.817337 2171 log.go:172] (0xc0002f2000) Go away received\nI0506 20:27:12.817683 2171 log.go:172] (0xc0002f2000) (0xc0009a55e0) Stream removed, broadcasting: 1\nI0506 20:27:12.817699 2171 log.go:172] (0xc0002f2000) (0xc000996e60) 
Stream removed, broadcasting: 3\nI0506 20:27:12.817709 2171 log.go:172] (0xc0002f2000) (0xc00098a5a0) Stream removed, broadcasting: 5\n" May 6 20:27:12.822: INFO: stdout: "" May 6 20:27:12.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8188 execpod-affinityxqmj6 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31915' May 6 20:27:13.003: INFO: stderr: "I0506 20:27:12.946801 2191 log.go:172] (0xc000aa7ce0) (0xc000a98280) Create stream\nI0506 20:27:12.946850 2191 log.go:172] (0xc000aa7ce0) (0xc000a98280) Stream added, broadcasting: 1\nI0506 20:27:12.948755 2191 log.go:172] (0xc000aa7ce0) Reply frame received for 1\nI0506 20:27:12.948791 2191 log.go:172] (0xc000aa7ce0) (0xc000602fa0) Create stream\nI0506 20:27:12.948802 2191 log.go:172] (0xc000aa7ce0) (0xc000602fa0) Stream added, broadcasting: 3\nI0506 20:27:12.949576 2191 log.go:172] (0xc000aa7ce0) Reply frame received for 3\nI0506 20:27:12.949598 2191 log.go:172] (0xc000aa7ce0) (0xc0006032c0) Create stream\nI0506 20:27:12.949610 2191 log.go:172] (0xc000aa7ce0) (0xc0006032c0) Stream added, broadcasting: 5\nI0506 20:27:12.950264 2191 log.go:172] (0xc000aa7ce0) Reply frame received for 5\nI0506 20:27:12.996149 2191 log.go:172] (0xc000aa7ce0) Data frame received for 5\nI0506 20:27:12.996175 2191 log.go:172] (0xc0006032c0) (5) Data frame handling\nI0506 20:27:12.996192 2191 log.go:172] (0xc0006032c0) (5) Data frame sent\nI0506 20:27:12.996201 2191 log.go:172] (0xc000aa7ce0) Data frame received for 5\nI0506 20:27:12.996212 2191 log.go:172] (0xc0006032c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31915\nConnection to 172.17.0.13 31915 port [tcp/31915] succeeded!\nI0506 20:27:12.996246 2191 log.go:172] (0xc0006032c0) (5) Data frame sent\nI0506 20:27:12.996350 2191 log.go:172] (0xc000aa7ce0) Data frame received for 5\nI0506 20:27:12.996381 2191 log.go:172] (0xc0006032c0) (5) Data frame handling\nI0506 20:27:12.996574 2191 log.go:172] (0xc000aa7ce0) Data frame received for 3\nI0506 20:27:12.996600 2191 log.go:172] (0xc000602fa0) (3) Data frame handling\nI0506 20:27:12.998352 2191 log.go:172] (0xc000aa7ce0) Data frame received for 1\nI0506 20:27:12.998380 2191 log.go:172] (0xc000a98280) (1) Data frame handling\nI0506 20:27:12.998395 2191 log.go:172] (0xc000a98280) (1) Data frame sent\nI0506 20:27:12.998411 2191 log.go:172] (0xc000aa7ce0) (0xc000a98280) Stream removed, broadcasting: 1\nI0506 20:27:12.998425 2191 log.go:172] (0xc000aa7ce0) Go away received\nI0506 20:27:12.998791 2191 log.go:172] (0xc000aa7ce0) (0xc000a98280) Stream removed, broadcasting: 1\nI0506 20:27:12.998817 2191 log.go:172] (0xc000aa7ce0) (0xc000602fa0) Stream removed, broadcasting: 3\nI0506 20:27:12.998833 2191 log.go:172] (0xc000aa7ce0) (0xc0006032c0) Stream removed, broadcasting: 5\n" May 6 20:27:13.003: INFO: stdout: "" May 6 20:27:13.003: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8188 execpod-affinityxqmj6 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31915' May 6 20:27:13.439: INFO: stderr: "I0506 20:27:13.363610 2214 log.go:172] (0xc000a456b0) (0xc000654fa0) Create stream\nI0506 20:27:13.363667 2214 log.go:172] (0xc000a456b0) (0xc000654fa0) Stream added, broadcasting: 1\nI0506 20:27:13.370609 2214 log.go:172] (0xc000a456b0) Reply frame received for 1\nI0506 20:27:13.370651 2214 log.go:172] (0xc000a456b0) (0xc00061dc20) Create stream\nI0506 20:27:13.370664 2214 log.go:172] (0xc000a456b0) 
(0xc00061dc20) Stream added, broadcasting: 3\nI0506 20:27:13.371540 2214 log.go:172] (0xc000a456b0) Reply frame received for 3\nI0506 20:27:13.371572 2214 log.go:172] (0xc000a456b0) (0xc0005edcc0) Create stream\nI0506 20:27:13.371582 2214 log.go:172] (0xc000a456b0) (0xc0005edcc0) Stream added, broadcasting: 5\nI0506 20:27:13.372282 2214 log.go:172] (0xc000a456b0) Reply frame received for 5\nI0506 20:27:13.434048 2214 log.go:172] (0xc000a456b0) Data frame received for 5\nI0506 20:27:13.434070 2214 log.go:172] (0xc0005edcc0) (5) Data frame handling\nI0506 20:27:13.434088 2214 log.go:172] (0xc0005edcc0) (5) Data frame sent\nI0506 20:27:13.434094 2214 log.go:172] (0xc000a456b0) Data frame received for 5\nI0506 20:27:13.434100 2214 log.go:172] (0xc0005edcc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31915\nConnection to 172.17.0.12 31915 port [tcp/31915] succeeded!\nI0506 20:27:13.434233 2214 log.go:172] (0xc000a456b0) Data frame received for 3\nI0506 20:27:13.434246 2214 log.go:172] (0xc00061dc20) (3) Data frame handling\nI0506 20:27:13.435277 2214 log.go:172] (0xc000a456b0) Data frame received for 1\nI0506 20:27:13.435294 2214 log.go:172] (0xc000654fa0) (1) Data frame handling\nI0506 20:27:13.435313 2214 log.go:172] (0xc000654fa0) (1) Data frame sent\nI0506 20:27:13.435337 2214 log.go:172] (0xc000a456b0) (0xc000654fa0) Stream removed, broadcasting: 1\nI0506 20:27:13.435353 2214 log.go:172] (0xc000a456b0) Go away received\nI0506 20:27:13.435731 2214 log.go:172] (0xc000a456b0) (0xc000654fa0) Stream removed, broadcasting: 1\nI0506 20:27:13.435756 2214 log.go:172] (0xc000a456b0) (0xc00061dc20) Stream removed, broadcasting: 3\nI0506 20:27:13.435764 2214 log.go:172] (0xc000a456b0) (0xc0005edcc0) Stream removed, broadcasting: 5\n" May 6 20:27:13.440: INFO: stdout: "" May 6 20:27:13.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8188 execpod-affinityxqmj6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31915/ ; done' May 6 20:27:13.740: INFO: stderr: "I0506 20:27:13.560728 2234 log.go:172] (0xc000ae8000) (0xc00060b040) Create stream\nI0506 20:27:13.560776 2234 log.go:172] (0xc000ae8000) (0xc00060b040) Stream added, broadcasting: 1\nI0506 20:27:13.562733 2234 log.go:172] (0xc000ae8000) Reply frame received for 1\nI0506 20:27:13.562784 2234 log.go:172] (0xc000ae8000) (0xc0005401e0) Create stream\nI0506 20:27:13.562802 2234 log.go:172] (0xc000ae8000) (0xc0005401e0) Stream added, broadcasting: 3\nI0506 20:27:13.563795 2234 log.go:172] (0xc000ae8000) Reply frame received for 3\nI0506 20:27:13.563812 2234 log.go:172] (0xc000ae8000) (0xc0006a85a0) Create stream\nI0506 20:27:13.563820 2234 log.go:172] (0xc000ae8000) (0xc0006a85a0) Stream added, broadcasting: 5\nI0506 20:27:13.564749 2234 log.go:172] (0xc000ae8000) Reply frame received for 5\nI0506 20:27:13.633913 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.633946 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.633969 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.633995 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.634033 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.634051 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.642258 2234 log.go:172] (0xc000ae8000) Data frame received for 
3\nI0506 20:27:13.642285 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.642311 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.642696 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.642725 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.642740 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.642758 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.642769 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.642781 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ echo\nI0506 20:27:13.642793 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.642826 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.642843 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.649589 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.649618 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.649638 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.650146 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.650161 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.650190 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.650219 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.650240 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.650262 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.654180 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.654197 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.654229 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.654566 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.654586 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.654609 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.654644 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.654667 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.654688 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\nI0506 20:27:13.659529 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.659560 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.659587 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.659839 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.659856 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.659867 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.659960 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.659989 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.660010 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.666248 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.666270 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.666280 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.666900 2234 log.go:172] (0xc000ae8000) Data frame 
received for 3\nI0506 20:27:13.666924 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.666954 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.666984 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.666997 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.667012 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.673544 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.673572 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.673594 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.674141 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.674171 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.674197 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.674217 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.674230 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.674247 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.679221 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.679240 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.679260 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.679720 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.679734 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.679747 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.679865 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.679892 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.679918 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.685336 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.685350 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.685359 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.685934 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.685959 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.685992 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.686015 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.686023 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.686031 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.690570 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.690586 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.690592 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.691479 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.691491 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.691507 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.691531 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.691543 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.691560 2234 log.go:172] (0xc0005401e0) (3) Data 
frame sent\nI0506 20:27:13.698501 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.698527 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.698552 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.699311 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.699359 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.699390 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.699430 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.699453 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.699470 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.703687 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.703717 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.703734 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.704176 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.704194 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.704225 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.704405 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.704429 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.704446 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.711159 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.711179 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.711210 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.711804 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.711830 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.711841 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.711857 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.711866 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.711882 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.716367 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.716398 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.716415 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.716791 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.716803 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.716809 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.716829 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.716858 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.716890 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.722533 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.722553 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.722570 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.723077 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.723105 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.723119 2234 log.go:172] (0xc0005401e0) 
(3) Data frame sent\nI0506 20:27:13.723139 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.723152 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.723166 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.727150 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.727163 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.727169 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.727545 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.727562 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.727579 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.727626 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.727642 2234 log.go:172] (0xc0006a85a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31915/\nI0506 20:27:13.727661 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.731840 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.731854 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.731862 2234 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0506 20:27:13.732525 2234 log.go:172] (0xc000ae8000) Data frame received for 3\nI0506 20:27:13.732554 2234 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0506 20:27:13.732588 2234 log.go:172] (0xc000ae8000) Data frame received for 5\nI0506 20:27:13.732620 2234 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 20:27:13.734692 2234 log.go:172] (0xc000ae8000) Data frame received for 1\nI0506 20:27:13.734716 2234 log.go:172] (0xc00060b040) (1) Data frame handling\nI0506 20:27:13.734733 2234 log.go:172] (0xc00060b040) (1) Data frame sent\nI0506 20:27:13.734764 2234 log.go:172] (0xc000ae8000) (0xc00060b040) Stream removed, broadcasting: 1\nI0506 20:27:13.734859 2234 log.go:172] (0xc000ae8000) Go away received\nI0506 20:27:13.735134 2234 log.go:172] (0xc000ae8000) (0xc00060b040) Stream removed, broadcasting: 1\nI0506 20:27:13.735158 2234 log.go:172] (0xc000ae8000) (0xc0005401e0) Stream removed, broadcasting: 3\nI0506 20:27:13.735180 2234 log.go:172] (0xc000ae8000) (0xc0006a85a0) Stream removed, broadcasting: 5\n" May 6 20:27:13.741: INFO: stdout: "\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js\naffinity-nodeport-pr9js" May 6 20:27:13.741: INFO: Received response from host: May 6 20:27:13.741: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.741: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.741: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.741: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.741: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.741: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.741: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.741: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.741: INFO: Received response from host: 
affinity-nodeport-pr9js May 6 20:27:13.741: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.741: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.741: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.741: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.742: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.742: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.742: INFO: Received response from host: affinity-nodeport-pr9js May 6 20:27:13.742: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-8188, will wait for the garbage collector to delete the pods May 6 20:27:14.979: INFO: Deleting ReplicationController affinity-nodeport took: 466.954166ms May 6 20:27:15.680: INFO: Terminating ReplicationController affinity-nodeport pods took: 700.199852ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:27:35.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8188" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:41.093 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":120,"skipped":2084,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:27:35.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 20:27:35.424: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75ea7e37-c8bd-4d41-a453-36bc872429a2" in namespace "downward-api-8011" to be "Succeeded or Failed" May 6 20:27:35.486: INFO: Pod "downwardapi-volume-75ea7e37-c8bd-4d41-a453-36bc872429a2": Phase="Pending", Reason="", readiness=false. Elapsed: 61.743201ms May 6 20:27:37.619: INFO: Pod "downwardapi-volume-75ea7e37-c8bd-4d41-a453-36bc872429a2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.194938431s May 6 20:27:39.624: INFO: Pod "downwardapi-volume-75ea7e37-c8bd-4d41-a453-36bc872429a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.199581767s STEP: Saw pod success May 6 20:27:39.624: INFO: Pod "downwardapi-volume-75ea7e37-c8bd-4d41-a453-36bc872429a2" satisfied condition "Succeeded or Failed" May 6 20:27:39.627: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-75ea7e37-c8bd-4d41-a453-36bc872429a2 container client-container: STEP: delete the pod May 6 20:27:40.021: INFO: Waiting for pod downwardapi-volume-75ea7e37-c8bd-4d41-a453-36bc872429a2 to disappear May 6 20:27:40.032: INFO: Pod downwardapi-volume-75ea7e37-c8bd-4d41-a453-36bc872429a2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:27:40.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8011" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":121,"skipped":2092,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:27:40.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 6 20:27:40.192: INFO: Created pod &Pod{ObjectMeta:{dns-4948 dns-4948 /api/v1/namespaces/dns-4948/pods/dns-4948 8e2dbc20-8c3f-404a-854b-80a2589d3ed5 2091142 0 2020-05-06 20:27:40 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-06 20:27:40 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2454,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2454,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2454,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]Co
ntainerStatus{},},} May 6 20:27:40.194: INFO: The status of Pod dns-4948 is Pending, waiting for it to be Running (with Ready = true) May 6 20:27:42.680: INFO: The status of Pod dns-4948 is Pending, waiting for it to be Running (with Ready = true) May 6 20:27:44.294: INFO: The status of Pod dns-4948 is Pending, waiting for it to be Running (with Ready = true) May 6 20:27:46.261: INFO: The status of Pod dns-4948 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 6 20:27:46.261: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4948 PodName:dns-4948 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:27:46.261: INFO: >>> kubeConfig: /root/.kube/config I0506 20:27:46.439542 7 log.go:172] (0xc00242afd0) (0xc000c7ad20) Create stream I0506 20:27:46.439567 7 log.go:172] (0xc00242afd0) (0xc000c7ad20) Stream added, broadcasting: 1 I0506 20:27:46.441085 7 log.go:172] (0xc00242afd0) Reply frame received for 1 I0506 20:27:46.441219 7 log.go:172] (0xc00242afd0) (0xc001334b40) Create stream I0506 20:27:46.441229 7 log.go:172] (0xc00242afd0) (0xc001334b40) Stream added, broadcasting: 3 I0506 20:27:46.441969 7 log.go:172] (0xc00242afd0) Reply frame received for 3 I0506 20:27:46.441998 7 log.go:172] (0xc00242afd0) (0xc001426460) Create stream I0506 20:27:46.442007 7 log.go:172] (0xc00242afd0) (0xc001426460) Stream added, broadcasting: 5 I0506 20:27:46.442615 7 log.go:172] (0xc00242afd0) Reply frame received for 5 I0506 20:27:46.516727 7 log.go:172] (0xc00242afd0) Data frame received for 3 I0506 20:27:46.516760 7 log.go:172] (0xc001334b40) (3) Data frame handling I0506 20:27:46.516784 7 log.go:172] (0xc001334b40) (3) Data frame sent I0506 20:27:46.520080 7 log.go:172] (0xc00242afd0) Data frame received for 3 I0506 20:27:46.520105 7 log.go:172] (0xc00242afd0) Data frame received for 5 I0506 20:27:46.520129 7 log.go:172] (0xc001426460) (5) Data frame handling I0506 20:27:46.520173 7 log.go:172] (0xc001334b40) (3) Data frame handling I0506 20:27:46.522010 7 log.go:172] (0xc00242afd0) Data frame received for 1 I0506 20:27:46.522042 7 log.go:172] (0xc000c7ad20) (1) Data frame handling I0506 20:27:46.522070 7 log.go:172] (0xc000c7ad20) (1) Data frame sent I0506 20:27:46.522090 7 log.go:172] (0xc00242afd0) (0xc000c7ad20) Stream removed, broadcasting: 1 I0506 20:27:46.522114 7 log.go:172] (0xc00242afd0) Go away received I0506 20:27:46.522268 7 log.go:172] (0xc00242afd0) (0xc000c7ad20) Stream removed, broadcasting: 1 I0506 20:27:46.522294 7 log.go:172] (0xc00242afd0) (0xc001334b40) Stream removed, broadcasting: 3 I0506 20:27:46.522309 7 log.go:172] (0xc00242afd0) (0xc001426460) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 6 20:27:46.522: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4948 PodName:dns-4948 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:27:46.522: INFO: >>> kubeConfig: /root/.kube/config I0506 20:27:46.820911 7 log.go:172] (0xc00222a420) (0xc0013355e0) Create stream I0506 20:27:46.820954 7 log.go:172] (0xc00222a420) (0xc0013355e0) Stream added, broadcasting: 1 I0506 20:27:46.823304 7 log.go:172] (0xc00222a420) Reply frame received for 1 I0506 20:27:46.823361 7 log.go:172] (0xc00222a420) (0xc002a106e0) Create stream I0506 20:27:46.823378 7 log.go:172] (0xc00222a420) (0xc002a106e0) Stream added, broadcasting: 3 I0506 20:27:46.824641 7 log.go:172] (0xc00222a420) Reply frame received for 3 I0506 20:27:46.824680 7 log.go:172] (0xc00222a420) (0xc001426640) Create stream I0506 20:27:46.824694 7 log.go:172] (0xc00222a420) (0xc001426640) Stream added, broadcasting: 5 I0506 20:27:46.825843 7 log.go:172] (0xc00222a420) Reply frame received for 5 I0506 20:27:46.901739 7 log.go:172] (0xc00222a420) Data frame received for 3 I0506 20:27:46.901781 7 log.go:172] (0xc002a106e0) (3) Data frame handling I0506 20:27:46.901801 7 log.go:172] (0xc002a106e0) (3) Data frame sent I0506 20:27:46.904042 7 log.go:172] (0xc00222a420) Data frame received for 3 I0506 20:27:46.904073 7 log.go:172] (0xc002a106e0) (3) Data frame handling I0506 20:27:46.904376 7 log.go:172] (0xc00222a420) Data frame received for 5 I0506 20:27:46.904388 7 log.go:172] (0xc001426640) (5) Data frame handling I0506 20:27:46.906674 7 log.go:172] (0xc00222a420) Data frame received for 1 I0506 20:27:46.906694 7 log.go:172] (0xc0013355e0) (1) Data frame handling I0506 20:27:46.906706 7 log.go:172] (0xc0013355e0) (1) Data frame sent I0506 20:27:46.906723 7 log.go:172] (0xc00222a420) (0xc0013355e0) Stream removed, broadcasting: 1 I0506 20:27:46.906741 7 log.go:172] (0xc00222a420) Go away received I0506 20:27:46.906832 7 log.go:172] (0xc00222a420) (0xc0013355e0) Stream removed, broadcasting: 1 I0506 20:27:46.906846 7 log.go:172] (0xc00222a420) (0xc002a106e0) Stream removed, broadcasting: 3 I0506 20:27:46.906856 7 log.go:172] (0xc00222a420) (0xc001426640) Stream removed, broadcasting: 5 May 6 20:27:46.906: INFO: Deleting pod dns-4948... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:27:47.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4948" for this suite. 
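The &Pod{...} struct dump earlier in this test's output is dense; reconstructed from it, the pod under test corresponds roughly to this manifest (defaults and status omitted):

apiVersion: v1
kind: Pod
metadata:
  name: dns-4948
  namespace: dns-4948
spec:
  dnsPolicy: None
  dnsConfig:
    nameservers:
    - 1.1.1.1
    searches:
    - resolv.conf.local
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args: ["pause"]

With dnsPolicy: None the kubelet writes only the dnsConfig entries into the container's /etc/resolv.conf, which is what the two exec'd agnhost commands (dns-suffix, dns-server-list) verify; assuming the image ships cat, `kubectl exec dns-4948 --namespace=dns-4948 -- cat /etc/resolv.conf` would show the same nameserver and search lines.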
• [SLOW TEST:7.861 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":122,"skipped":2123,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:27:47.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-4123da82-bd72-4ea6-84e6-cd0535fbe81e STEP: Creating a pod to test consume secrets May 6 20:27:48.680: INFO: Waiting up to 5m0s for pod "pod-secrets-075c5d3e-fa50-4340-a079-3fbcbfa8cefd" in namespace "secrets-5938" to be "Succeeded or Failed" May 6 20:27:48.684: INFO: Pod "pod-secrets-075c5d3e-fa50-4340-a079-3fbcbfa8cefd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.738969ms May 6 20:27:50.709: INFO: Pod "pod-secrets-075c5d3e-fa50-4340-a079-3fbcbfa8cefd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02851733s May 6 20:27:52.712: INFO: Pod "pod-secrets-075c5d3e-fa50-4340-a079-3fbcbfa8cefd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032027442s May 6 20:27:55.048: INFO: Pod "pod-secrets-075c5d3e-fa50-4340-a079-3fbcbfa8cefd": Phase="Running", Reason="", readiness=true. Elapsed: 6.36760418s May 6 20:27:57.116: INFO: Pod "pod-secrets-075c5d3e-fa50-4340-a079-3fbcbfa8cefd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.435938558s STEP: Saw pod success May 6 20:27:57.116: INFO: Pod "pod-secrets-075c5d3e-fa50-4340-a079-3fbcbfa8cefd" satisfied condition "Succeeded or Failed" May 6 20:27:57.118: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-075c5d3e-fa50-4340-a079-3fbcbfa8cefd container secret-env-test: STEP: delete the pod May 6 20:27:57.298: INFO: Waiting for pod pod-secrets-075c5d3e-fa50-4340-a079-3fbcbfa8cefd to disappear May 6 20:27:57.320: INFO: Pod pod-secrets-075c5d3e-fa50-4340-a079-3fbcbfa8cefd no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:27:57.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5938" for this suite. 
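A minimal sketch of the fixture this test builds, with illustrative names (the test generates UUID-suffixed ones; only the container name secret-env-test matches the log above, and the image choice is an assumption):

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox              # any image with a shell will do
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
EOF
# Once the pod reaches Succeeded, its log should contain SECRET_DATA=value-1:
kubectl logs pod-secrets | grep SECRET_DATA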
• [SLOW TEST:9.425 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":123,"skipped":2167,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:27:57.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 6 20:27:57.625: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:28:14.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3073" for this suite.
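What this test drives, as a hand-run sketch: given a CRD with two served versions, flipping served to false on one should remove that version's definition from the aggregated OpenAPI document while leaving the other untouched. Assuming the illustrative foos.example.com CRD from the earlier sketch, extended with versions v1 and v2:

# Stop serving the second version (index 1 in .spec.versions).
kubectl patch crd foos.example.com --type=json \
  -p='[{"op":"replace","path":"/spec/versions/1/served","value":false}]'
# The v2 schema should disappear from the published spec. The exact
# definition name (com.example.v2.Foo) is an assumption about the
# group-reversed naming convention, not taken from the log.
kubectl get --raw /openapi/v2 | grep -c 'com.example.v2.Foo'   # expect 0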
• [SLOW TEST:17.665 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":124,"skipped":2206,"failed":0} SS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:28:14.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:28:15.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5461" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":125,"skipped":2208,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:28:15.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 20:28:15.639: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea03b190-2f8e-4b63-b111-cdf59ac1f73a" in namespace "downward-api-6375" to be "Succeeded or Failed" May 6 20:28:15.680: INFO: Pod "downwardapi-volume-ea03b190-2f8e-4b63-b111-cdf59ac1f73a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.762931ms May 6 20:28:17.752: INFO: Pod "downwardapi-volume-ea03b190-2f8e-4b63-b111-cdf59ac1f73a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.112453529s May 6 20:28:19.817: INFO: Pod "downwardapi-volume-ea03b190-2f8e-4b63-b111-cdf59ac1f73a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177935825s May 6 20:28:21.884: INFO: Pod "downwardapi-volume-ea03b190-2f8e-4b63-b111-cdf59ac1f73a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.244880906s STEP: Saw pod success May 6 20:28:21.884: INFO: Pod "downwardapi-volume-ea03b190-2f8e-4b63-b111-cdf59ac1f73a" satisfied condition "Succeeded or Failed" May 6 20:28:21.887: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ea03b190-2f8e-4b63-b111-cdf59ac1f73a container client-container: STEP: delete the pod May 6 20:28:21.974: INFO: Waiting for pod downwardapi-volume-ea03b190-2f8e-4b63-b111-cdf59ac1f73a to disappear May 6 20:28:22.020: INFO: Pod downwardapi-volume-ea03b190-2f8e-4b63-b111-cdf59ac1f73a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:28:22.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6375" for this suite. • [SLOW TEST:6.579 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":126,"skipped":2210,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:28:22.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9003, will wait for the garbage collector to delete the pods May 6 20:28:30.759: INFO: Deleting Job.batch foo took: 342.926475ms May 6 20:28:31.259: INFO: Terminating Job.batch foo pods took: 500.217562ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:29:15.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9003" for this suite. 
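A sketch of the lifecycle this test walks through. The Job name foo matches the log; the pod template is illustrative (image and command are assumptions), and job-name is the label Jobs stamp on their pods:

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["sleep", "3600"]   # stay Running so active pods == parallelism
EOF
kubectl get pods -l job-name=foo     # expect 2 active pods
kubectl delete job foo               # cascading delete: the garbage collector
                                     # removes the pods, as logged above
kubectl get job foo                  # eventually returns NotFound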
• [SLOW TEST:53.386 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":127,"skipped":2221,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:29:15.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 6 20:29:15.599: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:15.614: INFO: Number of nodes with available pods: 0 May 6 20:29:15.614: INFO: Node latest-worker is running more than one daemon pod May 6 20:29:16.620: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:16.624: INFO: Number of nodes with available pods: 0 May 6 20:29:16.624: INFO: Node latest-worker is running more than one daemon pod May 6 20:29:17.620: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:17.624: INFO: Number of nodes with available pods: 0 May 6 20:29:17.624: INFO: Node latest-worker is running more than one daemon pod May 6 20:29:18.620: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:18.623: INFO: Number of nodes with available pods: 0 May 6 20:29:18.623: INFO: Node latest-worker is running more than one daemon pod May 6 20:29:19.656: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:19.669: INFO: Number of nodes with available pods: 0 May 6 20:29:19.669: INFO: Node latest-worker is running more than one daemon pod May 6 20:29:20.635: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:20.640: INFO: Number of nodes with available pods: 2 May 6 20:29:20.640: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
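The repeated "DaemonSet pods can't tolerate node latest-control-plane" lines above are expected, not a failure: the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint, the test's DaemonSet declares no matching toleration, and so the framework skips that node and counts only the two workers. A DaemonSet that should also land on such a node would need a toleration along these lines (illustrative sketch; the test's actual pod spec is built in Go):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon
      template:
        metadata:
          labels:
            app: daemon
        spec:
          tolerations:                            # this block is what the
          - key: node-role.kubernetes.io/master   # test's DaemonSet lacks
            operator: Exists
            effect: NoSchedule
          containers:
          - name: app
            image: busybox                        # assumption
            command: ["sleep", "3600"]
    EOF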
May 6 20:29:20.674: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:20.682: INFO: Number of nodes with available pods: 1 May 6 20:29:20.682: INFO: Node latest-worker is running more than one daemon pod May 6 20:29:21.688: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:21.692: INFO: Number of nodes with available pods: 1 May 6 20:29:21.692: INFO: Node latest-worker is running more than one daemon pod May 6 20:29:22.688: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:22.692: INFO: Number of nodes with available pods: 1 May 6 20:29:22.692: INFO: Node latest-worker is running more than one daemon pod May 6 20:29:23.688: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:23.692: INFO: Number of nodes with available pods: 1 May 6 20:29:23.692: INFO: Node latest-worker is running more than one daemon pod May 6 20:29:24.688: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:24.692: INFO: Number of nodes with available pods: 1 May 6 20:29:24.692: INFO: Node latest-worker is running more than one daemon pod May 6 20:29:25.687: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:25.690: INFO: Number of nodes with available pods: 1 May 6 20:29:25.690: INFO: Node latest-worker is running more than one daemon pod May 6 20:29:26.687: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:26.691: INFO: Number of nodes with available pods: 1 May 6 20:29:26.691: INFO: Node latest-worker is running more than one daemon pod May 6 20:29:27.688: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:27.691: INFO: Number of nodes with available pods: 1 May 6 20:29:27.691: INFO: Node latest-worker is running more than one daemon pod May 6 20:29:28.688: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:29:28.691: INFO: Number of nodes with available pods: 2 May 6 20:29:28.692: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2866, will wait for the garbage collector to delete the pods May 6 20:29:28.754: INFO: Deleting DaemonSet.extensions daemon-set took: 6.574316ms May 6 20:29:29.054: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.302787ms May 6 
20:29:33.844: INFO: Number of nodes with available pods: 0 May 6 20:29:33.844: INFO: Number of running nodes: 0, number of available pods: 0 May 6 20:29:33.850: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2866/daemonsets","resourceVersion":"2091693"},"items":null} May 6 20:29:33.873: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2866/pods","resourceVersion":"2091695"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:29:34.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2866" for this suite. • [SLOW TEST:18.623 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":128,"skipped":2239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:29:34.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-b54a5855-c112-4dbd-b49c-39ce5676b702 STEP: Creating a pod to test consume configMaps May 6 20:29:34.580: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8b7ac131-89ad-47e0-af3b-f60f507d312f" in namespace "projected-8304" to be "Succeeded or Failed" May 6 20:29:34.598: INFO: Pod "pod-projected-configmaps-8b7ac131-89ad-47e0-af3b-f60f507d312f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.195387ms May 6 20:29:36.602: INFO: Pod "pod-projected-configmaps-8b7ac131-89ad-47e0-af3b-f60f507d312f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022052262s May 6 20:29:38.883: INFO: Pod "pod-projected-configmaps-8b7ac131-89ad-47e0-af3b-f60f507d312f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303218361s May 6 20:29:40.962: INFO: Pod "pod-projected-configmaps-8b7ac131-89ad-47e0-af3b-f60f507d312f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.38150395s May 6 20:29:43.295: INFO: Pod "pod-projected-configmaps-8b7ac131-89ad-47e0-af3b-f60f507d312f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.715128358s STEP: Saw pod success May 6 20:29:43.295: INFO: Pod "pod-projected-configmaps-8b7ac131-89ad-47e0-af3b-f60f507d312f" satisfied condition "Succeeded or Failed" May 6 20:29:43.513: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-8b7ac131-89ad-47e0-af3b-f60f507d312f container projected-configmap-volume-test: STEP: delete the pod May 6 20:29:43.687: INFO: Waiting for pod pod-projected-configmaps-8b7ac131-89ad-47e0-af3b-f60f507d312f to disappear May 6 20:29:43.712: INFO: Pod pod-projected-configmaps-8b7ac131-89ad-47e0-af3b-f60f507d312f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:29:43.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8304" for this suite. • [SLOW TEST:9.684 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":129,"skipped":2263,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:29:43.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 6 20:29:44.338: INFO: Waiting up to 5m0s for pod "pod-7dc6c73b-502e-42ba-bb7f-0802f793fe7e" in namespace "emptydir-542" to be "Succeeded or Failed" May 6 20:29:44.476: INFO: Pod "pod-7dc6c73b-502e-42ba-bb7f-0802f793fe7e": Phase="Pending", Reason="", readiness=false. Elapsed: 137.883598ms May 6 20:29:46.481: INFO: Pod "pod-7dc6c73b-502e-42ba-bb7f-0802f793fe7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142474545s May 6 20:29:48.534: INFO: Pod "pod-7dc6c73b-502e-42ba-bb7f-0802f793fe7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195757517s May 6 20:29:50.539: INFO: Pod "pod-7dc6c73b-502e-42ba-bb7f-0802f793fe7e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.200241492s STEP: Saw pod success May 6 20:29:50.539: INFO: Pod "pod-7dc6c73b-502e-42ba-bb7f-0802f793fe7e" satisfied condition "Succeeded or Failed" May 6 20:29:50.541: INFO: Trying to get logs from node latest-worker pod pod-7dc6c73b-502e-42ba-bb7f-0802f793fe7e container test-container: STEP: delete the pod May 6 20:29:50.603: INFO: Waiting for pod pod-7dc6c73b-502e-42ba-bb7f-0802f793fe7e to disappear May 6 20:29:50.622: INFO: Pod pod-7dc6c73b-502e-42ba-bb7f-0802f793fe7e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:29:50.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-542" for this suite. • [SLOW TEST:6.947 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":130,"skipped":2275,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:29:50.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:30:08.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7495" for this suite. • [SLOW TEST:17.485 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":288,"completed":131,"skipped":2287,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:30:08.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:30:12.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2652" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":132,"skipped":2292,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:30:12.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-217c487e-ca75-4090-b34a-d32d2c18ff9a [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:30:13.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-139" for this suite. 
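The Secrets test above passes by failing: the API server's validation rejects a Secret whose data map contains an empty key, so no object is ever created. Reproducing that by hand (the Secret name and value below are illustrative):

    # expected outcome: a validation error and no Secret created
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-emptykey-test
    data:
      "": dGVzdA==    # empty key; the API server rejects this
    EOF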
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":133,"skipped":2328,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:30:13.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-22924ee6-d36d-429f-81e8-452a470eae1c STEP: Creating a pod to test consume configMaps May 6 20:30:15.493: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-421e0465-9d39-4a03-b2d2-60d7d8555b43" in namespace "projected-6915" to be "Succeeded or Failed" May 6 20:30:16.087: INFO: Pod "pod-projected-configmaps-421e0465-9d39-4a03-b2d2-60d7d8555b43": Phase="Pending", Reason="", readiness=false. Elapsed: 594.751625ms May 6 20:30:18.114: INFO: Pod "pod-projected-configmaps-421e0465-9d39-4a03-b2d2-60d7d8555b43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.621259473s May 6 20:30:20.424: INFO: Pod "pod-projected-configmaps-421e0465-9d39-4a03-b2d2-60d7d8555b43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.931192617s May 6 20:30:22.483: INFO: Pod "pod-projected-configmaps-421e0465-9d39-4a03-b2d2-60d7d8555b43": Phase="Running", Reason="", readiness=true. Elapsed: 6.990156942s May 6 20:30:24.522: INFO: Pod "pod-projected-configmaps-421e0465-9d39-4a03-b2d2-60d7d8555b43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.029798749s STEP: Saw pod success May 6 20:30:24.522: INFO: Pod "pod-projected-configmaps-421e0465-9d39-4a03-b2d2-60d7d8555b43" satisfied condition "Succeeded or Failed" May 6 20:30:24.530: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-421e0465-9d39-4a03-b2d2-60d7d8555b43 container projected-configmap-volume-test: STEP: delete the pod May 6 20:30:24.685: INFO: Waiting for pod pod-projected-configmaps-421e0465-9d39-4a03-b2d2-60d7d8555b43 to disappear May 6 20:30:24.716: INFO: Pod pod-projected-configmaps-421e0465-9d39-4a03-b2d2-60d7d8555b43 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:30:24.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6915" for this suite. 
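The projected-ConfigMap test mounts the ConfigMap through a projected volume rather than a plain configMap volume; functionally the container just reads the key as a file. A sketch under assumed names (the ConfigMap content, pod name, image, and mount path are not taken from the log):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demo-cm
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-projected
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["cat", "/etc/projected/data-1"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: demo-cm
    EOF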
• [SLOW TEST:11.391 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":134,"skipped":2328,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:30:24.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 20:30:24.794: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a169a325-9407-464d-b95b-0f2925bc7620" in namespace "projected-5453" to be "Succeeded or Failed" May 6 20:30:24.816: INFO: Pod "downwardapi-volume-a169a325-9407-464d-b95b-0f2925bc7620": Phase="Pending", Reason="", readiness=false. Elapsed: 21.825972ms May 6 20:30:26.848: INFO: Pod "downwardapi-volume-a169a325-9407-464d-b95b-0f2925bc7620": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054206154s May 6 20:30:28.920: INFO: Pod "downwardapi-volume-a169a325-9407-464d-b95b-0f2925bc7620": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126753822s May 6 20:30:30.924: INFO: Pod "downwardapi-volume-a169a325-9407-464d-b95b-0f2925bc7620": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130686062s STEP: Saw pod success May 6 20:30:30.924: INFO: Pod "downwardapi-volume-a169a325-9407-464d-b95b-0f2925bc7620" satisfied condition "Succeeded or Failed" May 6 20:30:30.927: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a169a325-9407-464d-b95b-0f2925bc7620 container client-container: STEP: delete the pod May 6 20:30:31.034: INFO: Waiting for pod downwardapi-volume-a169a325-9407-464d-b95b-0f2925bc7620 to disappear May 6 20:30:31.055: INFO: Pod downwardapi-volume-a169a325-9407-464d-b95b-0f2925bc7620 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:30:31.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5453" for this suite. 
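The point of the default-cpu-limit test: when a container sets no resources.limits.cpu, a downward API resourceFieldRef for limits.cpu falls back to the node's allocatable CPU. A sketch with assumed names and divisor:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-downward-cpu
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["cat", "/etc/podinfo/cpu_limit"]
        # deliberately no resources.limits.cpu, so the file below
        # reports node allocatable CPU instead
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu
                  divisor: 1m    # assumption; report the value in millicores
    EOF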
• [SLOW TEST:6.340 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":135,"skipped":2332,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:30:31.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:30:31.205: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-8bae70a4-782f-4a27-af89-f98db02002cc" in namespace "security-context-test-2556" to be "Succeeded or Failed" May 6 20:30:31.217: INFO: Pod "busybox-readonly-false-8bae70a4-782f-4a27-af89-f98db02002cc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.945574ms May 6 20:30:33.277: INFO: Pod "busybox-readonly-false-8bae70a4-782f-4a27-af89-f98db02002cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072553731s May 6 20:30:35.282: INFO: Pod "busybox-readonly-false-8bae70a4-782f-4a27-af89-f98db02002cc": Phase="Running", Reason="", readiness=true. Elapsed: 4.076669882s May 6 20:30:37.288: INFO: Pod "busybox-readonly-false-8bae70a4-782f-4a27-af89-f98db02002cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083062351s May 6 20:30:37.288: INFO: Pod "busybox-readonly-false-8bae70a4-782f-4a27-af89-f98db02002cc" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:30:37.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2556" for this suite. 
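The Security Context test succeeds because readOnlyRootFilesystem: false (the default) leaves the root filesystem writable, so a container that writes a file and exits reaches Succeeded. A sketch with an assumed image and command:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-false
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        command: ["/bin/sh", "-c", "echo hello > /file && cat /file"]
        securityContext:
          readOnlyRootFilesystem: false
    EOF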
• [SLOW TEST:6.235 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":136,"skipped":2336,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:30:37.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3066 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 6 20:30:37.588: INFO: Found 0 stateful pods, waiting for 3 May 6 20:30:47.592: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 20:30:47.592: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 20:30:47.592: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 6 20:30:57.592: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 20:30:57.592: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 20:30:57.592: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 6 20:30:57.621: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 6 20:31:07.679: INFO: Updating stateful set ss2 May 6 20:31:08.328: INFO: Waiting for Pod statefulset-3066/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 6 20:31:19.777: INFO: Found 2 stateful pods, waiting for 3 May 6 20:31:29.898: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - 
Ready=true May 6 20:31:29.898: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 20:31:29.898: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 6 20:31:39.782: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 20:31:39.782: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 20:31:39.782: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 6 20:31:39.808: INFO: Updating stateful set ss2 May 6 20:31:40.106: INFO: Waiting for Pod statefulset-3066/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 6 20:31:50.151: INFO: Updating stateful set ss2 May 6 20:31:50.328: INFO: Waiting for StatefulSet statefulset-3066/ss2 to complete update May 6 20:31:50.329: INFO: Waiting for Pod statefulset-3066/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 6 20:32:00.338: INFO: Deleting all statefulset in ns statefulset-3066 May 6 20:32:00.341: INFO: Scaling statefulset ss2 to 0 May 6 20:32:20.501: INFO: Waiting for statefulset status.replicas updated to 0 May 6 20:32:20.504: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:32:20.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3066" for this suite. • [SLOW TEST:103.238 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":137,"skipped":2349,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:32:20.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:32:38.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6280" for this suite. • [SLOW TEST:17.562 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":288,"completed":138,"skipped":2376,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:32:38.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 6 20:32:38.194: INFO: Waiting up to 5m0s for pod "downward-api-bcf76cd4-e115-4464-b45c-b9aa1dfa8b4e" in namespace "downward-api-695" to be "Succeeded or Failed" May 6 20:32:38.234: INFO: Pod "downward-api-bcf76cd4-e115-4464-b45c-b9aa1dfa8b4e": Phase="Pending", Reason="", readiness=false. Elapsed: 40.464394ms May 6 20:32:40.298: INFO: Pod "downward-api-bcf76cd4-e115-4464-b45c-b9aa1dfa8b4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10427021s May 6 20:32:42.303: INFO: Pod "downward-api-bcf76cd4-e115-4464-b45c-b9aa1dfa8b4e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.109084005s STEP: Saw pod success May 6 20:32:42.303: INFO: Pod "downward-api-bcf76cd4-e115-4464-b45c-b9aa1dfa8b4e" satisfied condition "Succeeded or Failed" May 6 20:32:42.306: INFO: Trying to get logs from node latest-worker pod downward-api-bcf76cd4-e115-4464-b45c-b9aa1dfa8b4e container dapi-container: STEP: delete the pod May 6 20:32:42.432: INFO: Waiting for pod downward-api-bcf76cd4-e115-4464-b45c-b9aa1dfa8b4e to disappear May 6 20:32:42.669: INFO: Pod downward-api-bcf76cd4-e115-4464-b45c-b9aa1dfa8b4e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:32:42.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-695" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":139,"skipped":2382,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:32:42.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:32:47.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7879" for this suite. 
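This kubelet test is the inverse of the writable-rootfs check run a few tests earlier: with readOnlyRootFilesystem: true, the same kind of write must fail. A sketch (image and command are assumptions):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-true
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        command: ["/bin/sh", "-c", "echo hello > /file"]
        securityContext:
          readOnlyRootFilesystem: true
    EOF
    # expected: the write fails with "Read-only file system"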
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":140,"skipped":2401,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:32:47.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 6 20:32:47.432: INFO: Waiting up to 5m0s for pod "pod-062118e4-c934-4f24-afcf-f02c9d36fb32" in namespace "emptydir-9514" to be "Succeeded or Failed" May 6 20:32:47.457: INFO: Pod "pod-062118e4-c934-4f24-afcf-f02c9d36fb32": Phase="Pending", Reason="", readiness=false. Elapsed: 25.276973ms May 6 20:32:49.461: INFO: Pod "pod-062118e4-c934-4f24-afcf-f02c9d36fb32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029450633s May 6 20:32:51.465: INFO: Pod "pod-062118e4-c934-4f24-afcf-f02c9d36fb32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032916173s STEP: Saw pod success May 6 20:32:51.465: INFO: Pod "pod-062118e4-c934-4f24-afcf-f02c9d36fb32" satisfied condition "Succeeded or Failed" May 6 20:32:51.467: INFO: Trying to get logs from node latest-worker2 pod pod-062118e4-c934-4f24-afcf-f02c9d36fb32 container test-container: STEP: delete the pod May 6 20:32:51.641: INFO: Waiting for pod pod-062118e4-c934-4f24-afcf-f02c9d36fb32 to disappear May 6 20:32:51.652: INFO: Pod pod-062118e4-c934-4f24-afcf-f02c9d36fb32 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:32:51.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9514" for this suite. 
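In the EmptyDir test matrix, "(root,0777,tmpfs)" decodes as: run as root, expect mode 0777 on the volume, and back the volume with memory. Only the last part appears in the manifest, via emptyDir.medium; a sketch with assumed names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-emptydir-tmpfs
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["/bin/sh", "-c", "mount | grep ' /ed ' && ls -ld /ed"]
        volumeMounts:
        - name: ed
          mountPath: /ed
      volumes:
      - name: ed
        emptyDir:
          medium: Memory    # tmpfs-backed emptyDir
    EOF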
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":141,"skipped":2406,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:32:51.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 6 20:32:51.920: INFO: Waiting up to 5m0s for pod "pod-715e3bb6-92d0-4438-aaba-f61cf8630681" in namespace "emptydir-8593" to be "Succeeded or Failed" May 6 20:32:52.036: INFO: Pod "pod-715e3bb6-92d0-4438-aaba-f61cf8630681": Phase="Pending", Reason="", readiness=false. Elapsed: 115.553264ms May 6 20:32:54.119: INFO: Pod "pod-715e3bb6-92d0-4438-aaba-f61cf8630681": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198539704s May 6 20:32:56.130: INFO: Pod "pod-715e3bb6-92d0-4438-aaba-f61cf8630681": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.209406309s STEP: Saw pod success May 6 20:32:56.130: INFO: Pod "pod-715e3bb6-92d0-4438-aaba-f61cf8630681" satisfied condition "Succeeded or Failed" May 6 20:32:56.132: INFO: Trying to get logs from node latest-worker pod pod-715e3bb6-92d0-4438-aaba-f61cf8630681 container test-container: STEP: delete the pod May 6 20:32:56.204: INFO: Waiting for pod pod-715e3bb6-92d0-4438-aaba-f61cf8630681 to disappear May 6 20:32:56.234: INFO: Pod pod-715e3bb6-92d0-4438-aaba-f61cf8630681 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:32:56.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8593" for this suite. 
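The "(non-root,0777,default)" variant differs from the previous test in two fields only: the pod runs under a non-root UID, and emptyDir.medium is left unset, so the volume lives on the node's default storage rather than tmpfs. A sketch (UID, image, and command assumed):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-emptydir-nonroot
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001    # assumption: any non-root UID
      containers:
      - name: test-container
        image: busybox
        command: ["/bin/sh", "-c", "id -u && ls -ld /ed"]
        volumeMounts:
        - name: ed
          mountPath: /ed
      volumes:
      - name: ed
        emptyDir: {}    # no medium => node default storage
    EOF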
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":142,"skipped":2433,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:32:56.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-lg4c STEP: Creating a pod to test atomic-volume-subpath May 6 20:32:56.844: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lg4c" in namespace "subpath-7534" to be "Succeeded or Failed" May 6 20:32:56.851: INFO: Pod "pod-subpath-test-downwardapi-lg4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.401452ms May 6 20:32:58.957: INFO: Pod "pod-subpath-test-downwardapi-lg4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112707725s May 6 20:33:00.960: INFO: Pod "pod-subpath-test-downwardapi-lg4c": Phase="Running", Reason="", readiness=true. Elapsed: 4.116346172s May 6 20:33:02.964: INFO: Pod "pod-subpath-test-downwardapi-lg4c": Phase="Running", Reason="", readiness=true. Elapsed: 6.120309001s May 6 20:33:04.969: INFO: Pod "pod-subpath-test-downwardapi-lg4c": Phase="Running", Reason="", readiness=true. Elapsed: 8.124463387s May 6 20:33:06.973: INFO: Pod "pod-subpath-test-downwardapi-lg4c": Phase="Running", Reason="", readiness=true. Elapsed: 10.129076403s May 6 20:33:08.978: INFO: Pod "pod-subpath-test-downwardapi-lg4c": Phase="Running", Reason="", readiness=true. Elapsed: 12.133630189s May 6 20:33:10.982: INFO: Pod "pod-subpath-test-downwardapi-lg4c": Phase="Running", Reason="", readiness=true. Elapsed: 14.137828585s May 6 20:33:12.989: INFO: Pod "pod-subpath-test-downwardapi-lg4c": Phase="Running", Reason="", readiness=true. Elapsed: 16.14473011s May 6 20:33:14.992: INFO: Pod "pod-subpath-test-downwardapi-lg4c": Phase="Running", Reason="", readiness=true. Elapsed: 18.148149205s May 6 20:33:16.996: INFO: Pod "pod-subpath-test-downwardapi-lg4c": Phase="Running", Reason="", readiness=true. Elapsed: 20.151371127s May 6 20:33:19.000: INFO: Pod "pod-subpath-test-downwardapi-lg4c": Phase="Running", Reason="", readiness=true. Elapsed: 22.155703531s May 6 20:33:21.007: INFO: Pod "pod-subpath-test-downwardapi-lg4c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.162765347s STEP: Saw pod success May 6 20:33:21.007: INFO: Pod "pod-subpath-test-downwardapi-lg4c" satisfied condition "Succeeded or Failed" May 6 20:33:21.010: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-lg4c container test-container-subpath-downwardapi-lg4c: STEP: delete the pod May 6 20:33:21.026: INFO: Waiting for pod pod-subpath-test-downwardapi-lg4c to disappear May 6 20:33:21.049: INFO: Pod pod-subpath-test-downwardapi-lg4c no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-lg4c May 6 20:33:21.049: INFO: Deleting pod "pod-subpath-test-downwardapi-lg4c" in namespace "subpath-7534" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:33:21.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7534" for this suite. • [SLOW TEST:24.849 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":143,"skipped":2434,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:33:21.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 6 20:33:28.113: INFO: Successfully updated pod "annotationupdate7a85f001-fd38-457d-b326-021149410e28" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:33:30.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-325" for this suite. 
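The annotation-update test relies on the kubelet refreshing downward API volume files in place when pod metadata changes, with no container restart; "Successfully updated pod" above is the suite patching the annotations and then re-reading the mounted file. A sketch, plus the mutation that triggers the refresh (names and values assumed):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate-demo
      annotations:
        build: one
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
    EOF
    # mutate the annotation; the kubelet rewrites the mounted file
    kubectl annotate pod annotationupdate-demo build=two --overwrite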
• [SLOW TEST:9.383 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":144,"skipped":2462,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:33:30.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 6 20:33:30.960: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 20:33:30.970: INFO: Waiting for terminating namespaces to be deleted... May 6 20:33:30.973: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 6 20:33:30.978: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 6 20:33:30.978: INFO: Container kindnet-cni ready: true, restart count 0 May 6 20:33:30.978: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 6 20:33:30.978: INFO: Container kube-proxy ready: true, restart count 0 May 6 20:33:30.978: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 6 20:33:30.982: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 6 20:33:30.982: INFO: Container kindnet-cni ready: true, restart count 0 May 6 20:33:30.982: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 6 20:33:30.982: INFO: Container kube-proxy ready: true, restart count 0 May 6 20:33:30.982: INFO: busybox-readonly-fs5daa23a0-f9a3-4678-955e-ac3e4fa2e0a0 from kubelet-test-7879 started at 2020-05-06 20:32:43 +0000 UTC (1 container statuses recorded) May 6 20:33:30.982: INFO: Container busybox-readonly-fs5daa23a0-f9a3-4678-955e-ac3e4fa2e0a0 ready: false, restart count 0 May 6 20:33:30.982: INFO: annotationupdate7a85f001-fd38-457d-b326-021149410e28 from projected-325 started at 2020-05-06 20:33:21 +0000 UTC (1 container statuses recorded) May 6 20:33:30.982: INFO: Container client-container ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
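The scheduling steps that follow create three pods that all ask for hostPort 54321 on the same node and still co-schedule, because the scheduler treats the (hostIP, hostPort, protocol) triple as the conflict key, not the port alone. A sketch of pod1; pod2 and pod3 would differ only in the commented fields, and the nodeSelector pinning the test does with its random label is omitted here (image and ports are assumptions):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["sleep", "3600"]
        ports:
        - containerPort: 8080
          hostPort: 54321
          hostIP: 127.0.0.1    # pod2 and pod3 use 127.0.0.2
          protocol: TCP        # pod3 switches to UDP
    EOF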
STEP: verifying the node has the label kubernetes.io/e2e-37622dc2-a66b-4ba6-8545-a4a6ec5a48d1 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-37622dc2-a66b-4ba6-8545-a4a6ec5a48d1 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-37622dc2-a66b-4ba6-8545-a4a6ec5a48d1 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:33:54.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3529" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:24.475 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":145,"skipped":2466,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:33:54.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-788.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-788.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-788.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-788.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-788.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-788.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-788.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-788.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-788.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-788.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 20:34:05.365: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:05.404: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:05.407: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:05.409: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:05.812: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:05.815: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:05.818: INFO: Unable to read jessie_udp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the 
server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:05.821: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:05.827: INFO: Lookups using dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local wheezy_udp@dns-test-service-2.dns-788.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-788.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local jessie_udp@dns-test-service-2.dns-788.svc.cluster.local jessie_tcp@dns-test-service-2.dns-788.svc.cluster.local] May 6 20:34:11.090: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:11.094: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:11.165: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:11.169: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:11.177: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:11.179: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:11.182: INFO: Unable to read jessie_udp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:11.184: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:11.320: INFO: Lookups using dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local wheezy_udp@dns-test-service-2.dns-788.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-788.svc.cluster.local 
jessie_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local jessie_udp@dns-test-service-2.dns-788.svc.cluster.local jessie_tcp@dns-test-service-2.dns-788.svc.cluster.local] May 6 20:34:16.083: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:16.087: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:16.504: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:16.560: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:16.889: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:16.892: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:16.895: INFO: Unable to read jessie_udp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:16.988: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:16.996: INFO: Lookups using dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local wheezy_udp@dns-test-service-2.dns-788.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-788.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local jessie_udp@dns-test-service-2.dns-788.svc.cluster.local jessie_tcp@dns-test-service-2.dns-788.svc.cluster.local] May 6 20:34:20.832: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:20.836: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested 
resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:20.839: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:20.843: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:20.959: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:20.964: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:20.967: INFO: Unable to read jessie_udp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:20.969: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:20.973: INFO: Lookups using dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local wheezy_udp@dns-test-service-2.dns-788.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-788.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local jessie_udp@dns-test-service-2.dns-788.svc.cluster.local jessie_tcp@dns-test-service-2.dns-788.svc.cluster.local] May 6 20:34:25.880: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:25.884: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:25.887: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:25.890: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:25.932: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod 
dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:25.936: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:25.938: INFO: Unable to read jessie_udp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:25.941: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:25.947: INFO: Lookups using dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local wheezy_udp@dns-test-service-2.dns-788.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-788.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-788.svc.cluster.local jessie_udp@dns-test-service-2.dns-788.svc.cluster.local jessie_tcp@dns-test-service-2.dns-788.svc.cluster.local] May 6 20:34:30.839: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:30.842: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:30.856: INFO: Unable to read jessie_udp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:30.858: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-788.svc.cluster.local from pod dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a: the server could not find the requested resource (get pods dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a) May 6 20:34:30.864: INFO: Lookups using dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a failed for: [wheezy_udp@dns-test-service-2.dns-788.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-788.svc.cluster.local jessie_udp@dns-test-service-2.dns-788.svc.cluster.local jessie_tcp@dns-test-service-2.dns-788.svc.cluster.local] May 6 20:34:35.872: INFO: DNS probes using dns-788/dns-test-b3aa5a23-67d4-46da-bfba-2075b27c393a succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:34:36.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-788" for this suite. 
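The probes above are plain dig lookups retried once per second until the headless service's records propagate; the "Unable to read" lines are the expected misses before DNS converges. A minimal sketch of one round of the same check, runnable from any pod with dig installed (the record name is the one used above; everything else is illustrative):

# One UDP and one TCP lookup, mirroring the probe commands the test
# injects into its wheezy and jessie pods.
name=dns-test-service-2.dns-788.svc.cluster.local
check="$(dig +notcp +noall +answer +search "$name" A)" && test -n "$check" && echo "udp OK"
check="$(dig +tcp +noall +answer +search "$name" A)" && test -n "$check" && echo "tcp OK"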
• [SLOW TEST:41.864 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":146,"skipped":2476,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:34:36.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 6 20:34:36.951: INFO: Waiting up to 5m0s for pod "client-containers-3db9c62e-ef9d-4656-adbb-d72559d3f52c" in namespace "containers-6068" to be "Succeeded or Failed" May 6 20:34:37.114: INFO: Pod "client-containers-3db9c62e-ef9d-4656-adbb-d72559d3f52c": Phase="Pending", Reason="", readiness=false. Elapsed: 163.534543ms May 6 20:34:39.119: INFO: Pod "client-containers-3db9c62e-ef9d-4656-adbb-d72559d3f52c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167793247s May 6 20:34:41.123: INFO: Pod "client-containers-3db9c62e-ef9d-4656-adbb-d72559d3f52c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171929379s May 6 20:34:43.215: INFO: Pod "client-containers-3db9c62e-ef9d-4656-adbb-d72559d3f52c": Phase="Running", Reason="", readiness=true. Elapsed: 6.263908754s May 6 20:34:45.219: INFO: Pod "client-containers-3db9c62e-ef9d-4656-adbb-d72559d3f52c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.267723774s STEP: Saw pod success May 6 20:34:45.219: INFO: Pod "client-containers-3db9c62e-ef9d-4656-adbb-d72559d3f52c" satisfied condition "Succeeded or Failed" May 6 20:34:45.222: INFO: Trying to get logs from node latest-worker2 pod client-containers-3db9c62e-ef9d-4656-adbb-d72559d3f52c container test-container: STEP: delete the pod May 6 20:34:45.283: INFO: Waiting for pod client-containers-3db9c62e-ef9d-4656-adbb-d72559d3f52c to disappear May 6 20:34:45.290: INFO: Pod client-containers-3db9c62e-ef9d-4656-adbb-d72559d3f52c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:34:45.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6068" for this suite. 
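The "override the image's default arguments" case comes down to a single spec field: containers[].args replaces the image's CMD (containers[].command would replace its ENTRYPOINT instead). A minimal sketch against a stock busybox image rather than the test's own container; all names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # busybox's default CMD is "sh"; args replaces it wholesale, so the
    # container runs `echo overridden arguments` and exits.
    args: ["echo", "overridden", "arguments"]
EOF
# once the pod has completed:
kubectl logs args-override-demo    # prints: overridden arguments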
• [SLOW TEST:8.484 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":147,"skipped":2482,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:34:45.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 6 20:34:45.409: INFO: Waiting up to 5m0s for pod "pod-4dbda1a4-81f5-40ad-9d41-f2f6630c8e29" in namespace "emptydir-2038" to be "Succeeded or Failed" May 6 20:34:45.415: INFO: Pod "pod-4dbda1a4-81f5-40ad-9d41-f2f6630c8e29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.36329ms May 6 20:34:47.544: INFO: Pod "pod-4dbda1a4-81f5-40ad-9d41-f2f6630c8e29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135399052s May 6 20:34:49.547: INFO: Pod "pod-4dbda1a4-81f5-40ad-9d41-f2f6630c8e29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137967937s May 6 20:34:51.552: INFO: Pod "pod-4dbda1a4-81f5-40ad-9d41-f2f6630c8e29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.142583287s STEP: Saw pod success May 6 20:34:51.552: INFO: Pod "pod-4dbda1a4-81f5-40ad-9d41-f2f6630c8e29" satisfied condition "Succeeded or Failed" May 6 20:34:51.555: INFO: Trying to get logs from node latest-worker pod pod-4dbda1a4-81f5-40ad-9d41-f2f6630c8e29 container test-container: STEP: delete the pod May 6 20:34:51.586: INFO: Waiting for pod pod-4dbda1a4-81f5-40ad-9d41-f2f6630c8e29 to disappear May 6 20:34:51.622: INFO: Pod pod-4dbda1a4-81f5-40ad-9d41-f2f6630c8e29 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:34:51.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2038" for this suite. 
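The emptyDir case above mounts a tmpfs-backed scratch volume (medium: Memory) into a container running as a non-root user; the test then expects the mount to carry 0777 permissions and to be writable. A minimal sketch of the same shape (names and UID are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # non-root, as in this test variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/volume && touch /mnt/volume/ok"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs rather than node disk
EOF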
• [SLOW TEST:6.332 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":148,"skipped":2490,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:34:51.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:36:51.711: INFO: Deleting pod "var-expansion-525f7633-7f9e-44ef-b1f2-92b395ed0dfe" in namespace "var-expansion-8473" May 6 20:36:51.716: INFO: Wait up to 5m0s for pod "var-expansion-525f7633-7f9e-44ef-b1f2-92b395ed0dfe" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:36:55.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8473" for this suite. 
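The backtick failure case leans on the subPathExpr grammar: only $(VAR_NAME) references to the container's environment are expanded, so a backtick value never yields a valid path, and the pod is expected to stay failed until it is deleted, which is why this spec is mostly a two-minute wait. For contrast, a minimal sketch of a valid expansion (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpathexpr-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo ok > /vol/out"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /vol
      subPathExpr: $(POD_NAME)   # a backtick expression here is what the test expects to fail
  volumes:
  - name: workdir
    emptyDir: {}
EOF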
• [SLOW TEST:124.286 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":149,"skipped":2495,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:36:55.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:36:56.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4940" for this suite. STEP: Destroying namespace "nspatchtest-e3252dc4-930d-4ca5-95bc-3842d6f6c3de-7937" for this suite. 
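The namespace patch above amounts to a strategic-merge patch that adds a label, followed by a read-back to confirm it stuck. A minimal sketch (namespace and label names illustrative):

kubectl create namespace patch-demo
# add a label in place, then verify it landed
kubectl patch namespace patch-demo -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
kubectl get namespace patch-demo --show-labels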
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":150,"skipped":2498,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:36:56.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 20:36:57.145: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 20:36:59.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394217, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394217, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394217, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394216, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 20:37:01.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394217, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394217, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394217, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394216, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 20:37:04.245: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:37:04.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8249" for this suite. STEP: Destroying namespace "webhook-8249-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.569 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":151,"skipped":2504,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:37:04.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-d053e3a6-e84c-4a39-bde9-42e4ec70b210 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:37:11.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6848" for this suite. 
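The binary-data case exercises the ConfigMap binaryData field, which carries base64-encoded bytes alongside the plain-text data map; when the ConfigMap is mounted, keys from both maps materialize as files. A minimal sketch (names and payload illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-demo
data:
  text: "hello"      # plain UTF-8 value
binaryData:
  blob: AAEC         # bytes 0x00 0x01 0x02, base64-encoded
EOF
# mounted as a volume, this yields two files: "text" and "blob"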
• [SLOW TEST:6.174 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":152,"skipped":2521,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:37:11.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-55481d36-955f-4b30-bc18-14da6e1efd98 STEP: Creating secret with name secret-projected-all-test-volume-41362d6c-b805-401f-8eac-1e92e1bc60d9 STEP: Creating a pod to test Check all projections for projected volume plugin May 6 20:37:11.270: INFO: Waiting up to 5m0s for pod "projected-volume-e3e05fb6-9d94-4760-8ce4-cda37a676c64" in namespace "projected-1500" to be "Succeeded or Failed" May 6 20:37:11.272: INFO: Pod "projected-volume-e3e05fb6-9d94-4760-8ce4-cda37a676c64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232641ms May 6 20:37:13.277: INFO: Pod "projected-volume-e3e05fb6-9d94-4760-8ce4-cda37a676c64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007066255s May 6 20:37:15.281: INFO: Pod "projected-volume-e3e05fb6-9d94-4760-8ce4-cda37a676c64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011119043s STEP: Saw pod success May 6 20:37:15.281: INFO: Pod "projected-volume-e3e05fb6-9d94-4760-8ce4-cda37a676c64" satisfied condition "Succeeded or Failed" May 6 20:37:15.283: INFO: Trying to get logs from node latest-worker pod projected-volume-e3e05fb6-9d94-4760-8ce4-cda37a676c64 container projected-all-volume-test: STEP: delete the pod May 6 20:37:15.469: INFO: Waiting for pod projected-volume-e3e05fb6-9d94-4760-8ce4-cda37a676c64 to disappear May 6 20:37:15.505: INFO: Pod projected-volume-e3e05fb6-9d94-4760-8ce4-cda37a676c64 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:37:15.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1500" for this suite. 
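The projected-volume case merges several sources into a single mount, which is what "all components that make up the projection API" refers to. A minimal sketch combining a ConfigMap, a Secret, and the downward API under one volume (the named ConfigMap and Secret are assumed to exist; all names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "ls /all-in-one"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all-in-one
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: binary-demo        # assumed to exist
      - secret:
          name: mode-demo          # assumed to exist
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF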
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":153,"skipped":2537,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:37:15.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-0bda37d0-c60a-4145-9d90-3a52e2b6043a STEP: Creating a pod to test consume secrets May 6 20:37:15.656: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d5931e57-7cb0-405c-b363-2928a00ffa62" in namespace "projected-7544" to be "Succeeded or Failed" May 6 20:37:15.673: INFO: Pod "pod-projected-secrets-d5931e57-7cb0-405c-b363-2928a00ffa62": Phase="Pending", Reason="", readiness=false. Elapsed: 17.494275ms May 6 20:37:17.713: INFO: Pod "pod-projected-secrets-d5931e57-7cb0-405c-b363-2928a00ffa62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057283463s May 6 20:37:19.924: INFO: Pod "pod-projected-secrets-d5931e57-7cb0-405c-b363-2928a00ffa62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267849008s May 6 20:37:21.958: INFO: Pod "pod-projected-secrets-d5931e57-7cb0-405c-b363-2928a00ffa62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.302687306s STEP: Saw pod success May 6 20:37:21.958: INFO: Pod "pod-projected-secrets-d5931e57-7cb0-405c-b363-2928a00ffa62" satisfied condition "Succeeded or Failed" May 6 20:37:21.961: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-d5931e57-7cb0-405c-b363-2928a00ffa62 container projected-secret-volume-test: STEP: delete the pod May 6 20:37:22.040: INFO: Waiting for pod pod-projected-secrets-d5931e57-7cb0-405c-b363-2928a00ffa62 to disappear May 6 20:37:22.084: INFO: Pod pod-projected-secrets-d5931e57-7cb0-405c-b363-2928a00ffa62 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:37:22.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7544" for this suite. 
• [SLOW TEST:6.591 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":154,"skipped":2592,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:37:22.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 6 20:37:22.178: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:37:29.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3571" for this suite. 
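Init containers run sequentially to completion before any app container starts; with restartPolicy Never, a failing init container fails the whole pod, which is the invariant this case asserts. A minimal sketch with two trivial init containers (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["true"]            # runs first, must exit 0
  - name: init2
    image: busybox
    command: ["true"]            # runs second, only after init1 succeeds
  containers:
  - name: run1
    image: busybox
    command: ["sh", "-c", "echo app started"]
EOF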
• [SLOW TEST:7.756 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":155,"skipped":2615,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:37:29.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 6 20:37:30.544: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 6 20:37:32.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394250, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394250, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394250, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394250, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 20:37:34.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394250, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394250, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394250, loc:(*time.Location)(0x7c2f200)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394250, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 20:37:37.815: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:37:37.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:37:39.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9775" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.310 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":156,"skipped":2648,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:37:39.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 6 20:39:40.106: INFO: Successfully updated pod "var-expansion-4a663d17-08a6-4956-889e-f251dc8cad74" STEP: waiting for pod running STEP: deleting the pod gracefully May 6 20:39:44.142: INFO: Deleting pod "var-expansion-4a663d17-08a6-4956-889e-f251dc8cad74" in namespace "var-expansion-1865" May 6 20:39:44.148: INFO: Wait up to 5m0s for pod "var-expansion-4a663d17-08a6-4956-889e-f251dc8cad74" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:40:18.200: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "var-expansion-1865" for this suite. • [SLOW TEST:159.026 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":157,"skipped":2654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:40:18.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-3ffce365-6052-41aa-b0c8-0a190355f5c4 STEP: Creating a pod to test consume secrets May 6 20:40:19.304: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e1c0d770-9b67-43ab-9c0a-787cfa59c6c5" in namespace "projected-5250" to be "Succeeded or Failed" May 6 20:40:19.507: INFO: Pod "pod-projected-secrets-e1c0d770-9b67-43ab-9c0a-787cfa59c6c5": Phase="Pending", Reason="", readiness=false. Elapsed: 202.847864ms May 6 20:40:21.943: INFO: Pod "pod-projected-secrets-e1c0d770-9b67-43ab-9c0a-787cfa59c6c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.638888939s May 6 20:40:24.237: INFO: Pod "pod-projected-secrets-e1c0d770-9b67-43ab-9c0a-787cfa59c6c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.932796555s May 6 20:40:26.242: INFO: Pod "pod-projected-secrets-e1c0d770-9b67-43ab-9c0a-787cfa59c6c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.937159017s STEP: Saw pod success May 6 20:40:26.242: INFO: Pod "pod-projected-secrets-e1c0d770-9b67-43ab-9c0a-787cfa59c6c5" satisfied condition "Succeeded or Failed" May 6 20:40:26.245: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-e1c0d770-9b67-43ab-9c0a-787cfa59c6c5 container projected-secret-volume-test: STEP: delete the pod May 6 20:40:26.291: INFO: Waiting for pod pod-projected-secrets-e1c0d770-9b67-43ab-9c0a-787cfa59c6c5 to disappear May 6 20:40:26.298: INFO: Pod pod-projected-secrets-e1c0d770-9b67-43ab-9c0a-787cfa59c6c5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:40:26.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5250" for this suite. 
• [SLOW TEST:8.096 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":158,"skipped":2689,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:40:26.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0506 20:41:07.132901 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 6 20:41:07.132: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:41:07.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8760" for this suite. 
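The orphaning run corresponds to deleting an owner with the Orphan propagation policy: the ReplicationController goes away, the garbage collector strips the pods' ownerReferences instead of deleting the pods, and the 30-second wait above exists purely to prove nothing disappears. A minimal sketch (resource name illustrative; kubectl releases contemporary with this run spelled the flag --cascade=false):

# delete the owner but leave its pods behind
kubectl delete rc my-rc --cascade=orphan
# the pods survive, with no ownerReference pointing at the old rc
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences}{"\n"}{end}'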
• [SLOW TEST:40.833 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":159,"skipped":2701,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:41:07.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-b1454fef-9d5d-498f-94d5-1dac14e7409f STEP: Creating a pod to test consume configMaps May 6 20:41:08.861: INFO: Waiting up to 5m0s for pod "pod-configmaps-511332bd-ffa4-45c9-bf9d-aa6a6f4c50ca" in namespace "configmap-9207" to be "Succeeded or Failed" May 6 20:41:08.917: INFO: Pod "pod-configmaps-511332bd-ffa4-45c9-bf9d-aa6a6f4c50ca": Phase="Pending", Reason="", readiness=false. Elapsed: 55.958327ms May 6 20:41:10.979: INFO: Pod "pod-configmaps-511332bd-ffa4-45c9-bf9d-aa6a6f4c50ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118044213s May 6 20:41:13.147: INFO: Pod "pod-configmaps-511332bd-ffa4-45c9-bf9d-aa6a6f4c50ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286931795s May 6 20:41:15.162: INFO: Pod "pod-configmaps-511332bd-ffa4-45c9-bf9d-aa6a6f4c50ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.301873492s STEP: Saw pod success May 6 20:41:15.163: INFO: Pod "pod-configmaps-511332bd-ffa4-45c9-bf9d-aa6a6f4c50ca" satisfied condition "Succeeded or Failed" May 6 20:41:15.187: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-511332bd-ffa4-45c9-bf9d-aa6a6f4c50ca container configmap-volume-test: STEP: delete the pod May 6 20:41:15.486: INFO: Waiting for pod pod-configmaps-511332bd-ffa4-45c9-bf9d-aa6a6f4c50ca to disappear May 6 20:41:15.491: INFO: Pod pod-configmaps-511332bd-ffa4-45c9-bf9d-aa6a6f4c50ca no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:41:15.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9207" for this suite. 
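The plain-data case is the mirror image of the binaryData sketch earlier: every key in the ConfigMap's data map becomes a file under the mount point, with the value as its content. A minimal sketch (names illustrative):

kubectl create configmap volume-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]   # prints: value-1
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: volume-demo
EOF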
• [SLOW TEST:8.412 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":160,"skipped":2701,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:41:15.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 20:41:17.009: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 20:41:19.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394477, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394477, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394477, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394476, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 20:41:21.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394477, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394477, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394477, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394476, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, 
CollisionCount:(*int32)(nil)} May 6 20:41:23.547: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394477, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394477, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394477, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394476, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 20:41:26.649: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:41:39.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4257" for this suite. STEP: Destroying namespace "webhook-4257-markers" for this suite. 
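The three cases just exercised are controlled by two fields on the webhook registration: timeoutSeconds bounds how long the API server waits for the webhook (defaulting to 10s in admissionregistration.k8s.io/v1 when unset, the "empty" case above), and failurePolicy decides whether a timeout rejects the request (Fail) or lets it through (Ignore). A sketch of the 1s-timeout/Ignore combination; the service name, path, and rule are illustrative, not taken from this run:

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: slow-webhook-demo
    webhooks:
    - name: slow.example.com
      clientConfig:
        service:
          name: e2e-test-webhook
          namespace: default
          path: /always-allow-delay-5s
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
      timeoutSeconds: 1      # shorter than the backend's deliberate 5s delay
      failurePolicy: Ignore  # a timeout is tolerated instead of failing the request
      sideEffects: None
      admissionReviewVersions: ["v1"]
    EOF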
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:25.109 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":161,"skipped":2706,"failed":0} S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:41:40.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-0cdf4d7b-908d-4b43-9a1a-06980550e23f in namespace container-probe-6150 May 6 20:41:45.571: INFO: Started pod liveness-0cdf4d7b-908d-4b43-9a1a-06980550e23f in namespace container-probe-6150 STEP: checking the pod's current state and verifying that restartCount is present May 6 20:41:45.574: INFO: Initial restart count of pod liveness-0cdf4d7b-908d-4b43-9a1a-06980550e23f is 0 May 6 20:42:03.806: INFO: Restart count of pod container-probe-6150/liveness-0cdf4d7b-908d-4b43-9a1a-06980550e23f is now 1 (18.231784281s elapsed) May 6 20:42:23.880: INFO: Restart count of pod container-probe-6150/liveness-0cdf4d7b-908d-4b43-9a1a-06980550e23f is now 2 (38.305552695s elapsed) May 6 20:42:43.920: INFO: Restart count of pod container-probe-6150/liveness-0cdf4d7b-908d-4b43-9a1a-06980550e23f is now 3 (58.345656482s elapsed) May 6 20:43:04.067: INFO: Restart count of pod container-probe-6150/liveness-0cdf4d7b-908d-4b43-9a1a-06980550e23f is now 4 (1m18.492857967s elapsed) May 6 20:44:17.523: INFO: Restart count of pod container-probe-6150/liveness-0cdf4d7b-908d-4b43-9a1a-06980550e23f is now 5 (2m31.948735251s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:44:17.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6150" for this suite. 
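The monotonically increasing counter polled above is status.containerStatuses[].restartCount, which the kubelet bumps each time the liveness probe kills the container. A self-contained pod that fails its probe on a fixed cycle (the pattern only, not this suite's exact image or timings):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-demo
    spec:
      containers:
      - name: liveness
        image: busybox
        args: ["sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/healthy"]   # starts failing 30s in
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
    kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'

Note the growing gaps between restarts in the log (roughly 20s apart at first, then 73s before restart 5): the kubelet backs off on crash-looping containers, which is why the test asserts monotonicity rather than a fixed schedule.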
• [SLOW TEST:156.925 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":162,"skipped":2707,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:44:17.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-qckp STEP: Creating a pod to test atomic-volume-subpath May 6 20:44:17.683: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-qckp" in namespace "subpath-2715" to be "Succeeded or Failed" May 6 20:44:17.932: INFO: Pod "pod-subpath-test-secret-qckp": Phase="Pending", Reason="", readiness=false. Elapsed: 248.69299ms May 6 20:44:19.936: INFO: Pod "pod-subpath-test-secret-qckp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253191656s May 6 20:44:21.940: INFO: Pod "pod-subpath-test-secret-qckp": Phase="Running", Reason="", readiness=true. Elapsed: 4.257306082s May 6 20:44:23.944: INFO: Pod "pod-subpath-test-secret-qckp": Phase="Running", Reason="", readiness=true. Elapsed: 6.261483364s May 6 20:44:25.948: INFO: Pod "pod-subpath-test-secret-qckp": Phase="Running", Reason="", readiness=true. Elapsed: 8.26550757s May 6 20:44:27.952: INFO: Pod "pod-subpath-test-secret-qckp": Phase="Running", Reason="", readiness=true. Elapsed: 10.269450493s May 6 20:44:29.956: INFO: Pod "pod-subpath-test-secret-qckp": Phase="Running", Reason="", readiness=true. Elapsed: 12.273207971s May 6 20:44:31.961: INFO: Pod "pod-subpath-test-secret-qckp": Phase="Running", Reason="", readiness=true. Elapsed: 14.277986068s May 6 20:44:33.965: INFO: Pod "pod-subpath-test-secret-qckp": Phase="Running", Reason="", readiness=true. Elapsed: 16.282199195s May 6 20:44:35.969: INFO: Pod "pod-subpath-test-secret-qckp": Phase="Running", Reason="", readiness=true. Elapsed: 18.286127866s May 6 20:44:37.974: INFO: Pod "pod-subpath-test-secret-qckp": Phase="Running", Reason="", readiness=true. Elapsed: 20.290801286s May 6 20:44:39.978: INFO: Pod "pod-subpath-test-secret-qckp": Phase="Running", Reason="", readiness=true. Elapsed: 22.295298005s May 6 20:44:41.982: INFO: Pod "pod-subpath-test-secret-qckp": Phase="Running", Reason="", readiness=true. Elapsed: 24.298953421s May 6 20:44:43.987: INFO: Pod "pod-subpath-test-secret-qckp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.30363436s STEP: Saw pod success May 6 20:44:43.987: INFO: Pod "pod-subpath-test-secret-qckp" satisfied condition "Succeeded or Failed" May 6 20:44:43.990: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-qckp container test-container-subpath-secret-qckp: STEP: delete the pod May 6 20:44:44.040: INFO: Waiting for pod pod-subpath-test-secret-qckp to disappear May 6 20:44:44.052: INFO: Pod pod-subpath-test-secret-qckp no longer exists STEP: Deleting pod pod-subpath-test-secret-qckp May 6 20:44:44.052: INFO: Deleting pod "pod-subpath-test-secret-qckp" in namespace "subpath-2715" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:44:44.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2715" for this suite. • [SLOW TEST:26.477 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":163,"skipped":2728,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:44:44.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-2b3b5bcd-8dc0-44e7-b57f-1364a1d738bf STEP: Creating a pod to test consume secrets May 6 20:44:44.155: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-889b3405-3148-4046-abc7-5a3bfca7171b" in namespace "projected-7592" to be "Succeeded or Failed" May 6 20:44:44.167: INFO: Pod "pod-projected-secrets-889b3405-3148-4046-abc7-5a3bfca7171b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.282829ms May 6 20:44:46.255: INFO: Pod "pod-projected-secrets-889b3405-3148-4046-abc7-5a3bfca7171b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100617715s May 6 20:44:48.363: INFO: Pod "pod-projected-secrets-889b3405-3148-4046-abc7-5a3bfca7171b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208514227s May 6 20:44:50.368: INFO: Pod "pod-projected-secrets-889b3405-3148-4046-abc7-5a3bfca7171b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.213226173s STEP: Saw pod success May 6 20:44:50.368: INFO: Pod "pod-projected-secrets-889b3405-3148-4046-abc7-5a3bfca7171b" satisfied condition "Succeeded or Failed" May 6 20:44:50.372: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-889b3405-3148-4046-abc7-5a3bfca7171b container secret-volume-test: STEP: delete the pod May 6 20:44:50.478: INFO: Waiting for pod pod-projected-secrets-889b3405-3148-4046-abc7-5a3bfca7171b to disappear May 6 20:44:50.519: INFO: Pod pod-projected-secrets-889b3405-3148-4046-abc7-5a3bfca7171b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:44:50.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7592" for this suite. • [SLOW TEST:6.463 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":164,"skipped":2740,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:44:50.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 6 20:44:50.636: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:44:56.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1435" for this suite. 
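With restartPolicy: Never a failing init container is terminal: the kubelet does not retry it, the pod's phase becomes Failed, and the app containers never start, which is exactly what the spec above asserts in about six seconds. A minimal reproduction with illustrative names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init
        image: busybox
        command: ["sh", "-c", "exit 1"]   # fails once and is not retried
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "echo this never runs"]
    EOF
    kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # settles at "Failed"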
• [SLOW TEST:5.974 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":165,"skipped":2744,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:44:56.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-2e192ef2-a8a6-468b-ba7f-ab171085e7c2 STEP: Creating configMap with name cm-test-opt-upd-9419c67f-f76b-4afd-8c53-22d6fa5acc01 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2e192ef2-a8a6-468b-ba7f-ab171085e7c2 STEP: Updating configmap cm-test-opt-upd-9419c67f-f76b-4afd-8c53-22d6fa5acc01 STEP: Creating configMap with name cm-test-opt-create-e12144d4-1d01-4e7c-af73-e2441da97485 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:45:06.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5964" for this suite. 
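The "optional" in this spec comes from configMap.optional: true on the volume source, which lets the pod start while a referenced ConfigMap is still missing and pick the data up once it appears; the delete/update/create steps above then rely on the kubelet refreshing configMap volumes on its periodic sync rather than instantly. The relevant fragment, names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: optional-cm-demo
    spec:
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/cfg
      volumes:
      - name: cfg
        configMap:
          name: cm-that-may-not-exist-yet
          optional: true   # a missing ConfigMap is not a mount error
    EOF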
• [SLOW TEST:10.316 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":166,"skipped":2749,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:45:06.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-47c3edd4-2ee3-48b0-8fa5-ce71724ba6dd STEP: Creating secret with name s-test-opt-upd-1c86a5a8-4567-45f1-83de-dca17074ca5f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-47c3edd4-2ee3-48b0-8fa5-ce71724ba6dd STEP: Updating secret s-test-opt-upd-1c86a5a8-4567-45f1-83de-dca17074ca5f STEP: Creating secret with name s-test-opt-create-606fe11b-5e9b-43aa-9e4e-47bab56a4bb4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:46:32.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6336" for this suite. 
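The Secrets variant runs the same delete/update/create choreography with secret.optional: true on the volume; the 85-second runtime here versus 10 seconds for the ConfigMap case is plausibly just a longer wait for the kubelet's periodic volume resync, since both kinds propagate through the same mechanism. Updating a secret in place can be done declaratively (names illustrative):

    kubectl create secret generic s-test --from-literal=data-1=value-1
    # replace the value; mounted copies catch up on the kubelet sync period
    kubectl create secret generic s-test --from-literal=data-1=new-value \
      --dry-run=client -o yaml | kubectl apply -f -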
• [SLOW TEST:85.793 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":167,"skipped":2783,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:46:32.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:46:33.780: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:46:35.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6439" for this suite. 
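Status as a sub-resource only exists for a CRD that opts in through spec.versions[].subresources.status, which splits writes: updates to /status ignore changes to spec and vice versa. A minimal CRD sketch with an illustrative group and kind:

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: widgets
        singular: widget
        kind: Widget
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
        subresources:
          status: {}   # enables GET/PUT/PATCH on .../widgets/<name>/status
    EOF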
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":168,"skipped":2799,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:46:36.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 20:46:39.867: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 20:46:42.853: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394799, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394799, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394800, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394799, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 20:46:44.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394799, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394799, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394800, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394799, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 20:46:46.972: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394799, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394799, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394800, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724394799, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 20:46:49.973: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:46:50.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5979" for this suite. STEP: Destroying namespace "webhook-5979-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.709 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":169,"skipped":2806,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:46:52.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 6 20:46:57.988: INFO: Successfully updated pod "labelsupdate9dd77d0f-4838-423f-9d48-0bd4023f5e22" [AfterEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:47:02.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7336" for this suite. • [SLOW TEST:10.137 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":170,"skipped":2821,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:47:02.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-6290 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6290 STEP: Deleting pre-stop pod May 6 20:47:18.393: INFO: Saw:
{
    "Hostname": "server",
    "Sent": null,
    "Received": {
        "prestop": 1
    },
    "Errors": null,
    "Log": [
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    ],
    "StillContactingPeers": true
}
STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:47:18.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6290" for this suite.
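The "prestop": 1 counter in the report above is the server tallying one hit from the tester pod's preStop hook while that pod was being deleted. The hook is declared on the container and must finish within the termination grace period; a generic sketch with an illustrative target URL:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: prestop-demo
    spec:
      terminationGracePeriodSeconds: 30   # the budget the hook has to finish
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "wget -q -O- http://server.default.svc/prestop || true"]
    EOF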
• [SLOW TEST:15.811 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":171,"skipped":2835,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:47:18.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 20:47:19.061: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e061bf04-b4f6-4f59-977d-33c189df7eca" in namespace "downward-api-5223" to be "Succeeded or Failed" May 6 20:47:19.109: INFO: Pod "downwardapi-volume-e061bf04-b4f6-4f59-977d-33c189df7eca": Phase="Pending", Reason="", readiness=false. Elapsed: 48.214872ms May 6 20:47:21.248: INFO: Pod "downwardapi-volume-e061bf04-b4f6-4f59-977d-33c189df7eca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186589809s May 6 20:47:23.252: INFO: Pod "downwardapi-volume-e061bf04-b4f6-4f59-977d-33c189df7eca": Phase="Running", Reason="", readiness=true. Elapsed: 4.190530243s May 6 20:47:25.268: INFO: Pod "downwardapi-volume-e061bf04-b4f6-4f59-977d-33c189df7eca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.207226001s STEP: Saw pod success May 6 20:47:25.268: INFO: Pod "downwardapi-volume-e061bf04-b4f6-4f59-977d-33c189df7eca" satisfied condition "Succeeded or Failed" May 6 20:47:25.271: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e061bf04-b4f6-4f59-977d-33c189df7eca container client-container: STEP: delete the pod May 6 20:47:25.444: INFO: Waiting for pod downwardapi-volume-e061bf04-b4f6-4f59-977d-33c189df7eca to disappear May 6 20:47:25.470: INFO: Pod downwardapi-volume-e061bf04-b4f6-4f59-977d-33c189df7eca no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:47:25.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5223" for this suite. 
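The container's memory limit is exposed to the pod through a downwardAPI volume item with a resourceFieldRef; divisor controls the unit the value is rendered in. A sketch with illustrative names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-limit-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]   # prints "64"
        resources:
          limits:
            memory: 64Mi
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi   # render the limit in mebibytes
    EOF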
• [SLOW TEST:7.007 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":172,"skipped":2836,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:47:25.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3812 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3812 STEP: Creating statefulset with conflicting port in namespace statefulset-3812 STEP: Waiting until pod test-pod starts running in namespace statefulset-3812 STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-3812 May 6 20:47:31.853: INFO: Observed stateful pod in namespace: statefulset-3812, name: ss-0, uid: 64d3cf76-ad8e-4fea-b94c-29c7082bda0e, status phase: Pending. Waiting for statefulset controller to delete. May 6 20:47:32.215: INFO: Observed stateful pod in namespace: statefulset-3812, name: ss-0, uid: 64d3cf76-ad8e-4fea-b94c-29c7082bda0e, status phase: Failed. Waiting for statefulset controller to delete. May 6 20:47:32.251: INFO: Observed stateful pod in namespace: statefulset-3812, name: ss-0, uid: 64d3cf76-ad8e-4fea-b94c-29c7082bda0e, status phase: Failed. Waiting for statefulset controller to delete.
May 6 20:47:32.269: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3812 STEP: Removing pod with conflicting port in namespace statefulset-3812 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-3812 and running [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 6 20:47:38.475: INFO: Deleting all statefulsets in ns statefulset-3812 May 6 20:47:38.478: INFO: Scaling statefulset ss to 0 May 6 20:47:48.500: INFO: Waiting for statefulset status.replicas to be updated to 0 May 6 20:47:48.504: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:47:48.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3812" for this suite. • [SLOW TEST:23.071 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":173,"skipped":2852,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:47:48.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 6 20:47:48.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9176' May 6 20:47:53.904: INFO: stderr: "" May 6 20:47:53.904: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up.
May 6 20:47:53.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9176' May 6 20:47:54.054: INFO: stderr: "" May 6 20:47:54.054: INFO: stdout: "update-demo-nautilus-fszrg update-demo-nautilus-lpxbg " May 6 20:47:54.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fszrg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9176' May 6 20:47:54.234: INFO: stderr: "" May 6 20:47:54.234: INFO: stdout: "" May 6 20:47:54.234: INFO: update-demo-nautilus-fszrg is created but not running May 6 20:47:59.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9176' May 6 20:47:59.348: INFO: stderr: "" May 6 20:47:59.348: INFO: stdout: "update-demo-nautilus-fszrg update-demo-nautilus-lpxbg " May 6 20:47:59.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fszrg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9176' May 6 20:47:59.456: INFO: stderr: "" May 6 20:47:59.456: INFO: stdout: "true" May 6 20:47:59.456: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fszrg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9176' May 6 20:47:59.674: INFO: stderr: "" May 6 20:47:59.674: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 20:47:59.674: INFO: validating pod update-demo-nautilus-fszrg May 6 20:47:59.679: INFO: got data: { "image": "nautilus.jpg" } May 6 20:47:59.679: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 20:47:59.679: INFO: update-demo-nautilus-fszrg is verified up and running May 6 20:47:59.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lpxbg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9176' May 6 20:47:59.787: INFO: stderr: "" May 6 20:47:59.787: INFO: stdout: "true" May 6 20:47:59.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lpxbg -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9176' May 6 20:47:59.947: INFO: stderr: "" May 6 20:47:59.947: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 20:47:59.947: INFO: validating pod update-demo-nautilus-lpxbg May 6 20:47:59.951: INFO: got data: { "image": "nautilus.jpg" } May 6 20:47:59.951: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 20:47:59.951: INFO: update-demo-nautilus-lpxbg is verified up and running STEP: scaling down the replication controller May 6 20:47:59.953: INFO: scanned /root for discovery docs: May 6 20:47:59.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9176' May 6 20:48:01.331: INFO: stderr: "" May 6 20:48:01.331: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 20:48:01.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9176' May 6 20:48:01.480: INFO: stderr: "" May 6 20:48:01.480: INFO: stdout: "update-demo-nautilus-fszrg update-demo-nautilus-lpxbg " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 20:48:06.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9176' May 6 20:48:06.590: INFO: stderr: "" May 6 20:48:06.590: INFO: stdout: "update-demo-nautilus-fszrg " May 6 20:48:06.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fszrg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9176' May 6 20:48:06.706: INFO: stderr: "" May 6 20:48:06.706: INFO: stdout: "true" May 6 20:48:06.706: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fszrg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9176' May 6 20:48:06.802: INFO: stderr: "" May 6 20:48:06.802: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 20:48:06.802: INFO: validating pod update-demo-nautilus-fszrg May 6 20:48:06.805: INFO: got data: { "image": "nautilus.jpg" } May 6 20:48:06.805: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 6 20:48:06.805: INFO: update-demo-nautilus-fszrg is verified up and running STEP: scaling up the replication controller May 6 20:48:06.808: INFO: scanned /root for discovery docs: May 6 20:48:06.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9176' May 6 20:48:07.940: INFO: stderr: "" May 6 20:48:07.940: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 20:48:07.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9176' May 6 20:48:08.061: INFO: stderr: "" May 6 20:48:08.061: INFO: stdout: "update-demo-nautilus-fszrg update-demo-nautilus-jdcvc " May 6 20:48:08.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fszrg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9176' May 6 20:48:08.159: INFO: stderr: "" May 6 20:48:08.159: INFO: stdout: "true" May 6 20:48:08.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fszrg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9176' May 6 20:48:08.259: INFO: stderr: "" May 6 20:48:08.259: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 20:48:08.259: INFO: validating pod update-demo-nautilus-fszrg May 6 20:48:08.261: INFO: got data: { "image": "nautilus.jpg" } May 6 20:48:08.261: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 20:48:08.261: INFO: update-demo-nautilus-fszrg is verified up and running May 6 20:48:08.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jdcvc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9176' May 6 20:48:08.358: INFO: stderr: "" May 6 20:48:08.358: INFO: stdout: "" May 6 20:48:08.358: INFO: update-demo-nautilus-jdcvc is created but not running May 6 20:48:13.358: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9176' May 6 20:48:13.476: INFO: stderr: "" May 6 20:48:13.476: INFO: stdout: "update-demo-nautilus-fszrg update-demo-nautilus-jdcvc " May 6 20:48:13.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fszrg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9176' May 6 20:48:13.576: INFO: stderr: "" May 6 20:48:13.576: INFO: stdout: "true" May 6 20:48:13.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fszrg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9176' May 6 20:48:13.664: INFO: stderr: "" May 6 20:48:13.664: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 20:48:13.664: INFO: validating pod update-demo-nautilus-fszrg May 6 20:48:13.667: INFO: got data: { "image": "nautilus.jpg" } May 6 20:48:13.667: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 20:48:13.667: INFO: update-demo-nautilus-fszrg is verified up and running May 6 20:48:13.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jdcvc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9176' May 6 20:48:13.770: INFO: stderr: "" May 6 20:48:13.770: INFO: stdout: "true" May 6 20:48:13.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jdcvc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9176' May 6 20:48:13.872: INFO: stderr: "" May 6 20:48:13.872: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 20:48:13.872: INFO: validating pod update-demo-nautilus-jdcvc May 6 20:48:13.877: INFO: got data: { "image": "nautilus.jpg" } May 6 20:48:13.877: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 20:48:13.877: INFO: update-demo-nautilus-jdcvc is verified up and running STEP: using delete to clean up resources May 6 20:48:13.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9176' May 6 20:48:13.992: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 6 20:48:13.992: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 6 20:48:13.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9176' May 6 20:48:14.087: INFO: stderr: "No resources found in kubectl-9176 namespace.\n" May 6 20:48:14.087: INFO: stdout: "" May 6 20:48:14.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9176 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 20:48:14.199: INFO: stderr: "" May 6 20:48:14.199: INFO: stdout: "update-demo-nautilus-fszrg\nupdate-demo-nautilus-jdcvc\n" May 6 20:48:14.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9176' May 6 20:48:14.887: INFO: stderr: "No resources found in kubectl-9176 namespace.\n" May 6 20:48:14.887: INFO: stdout: "" May 6 20:48:14.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9176 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 20:48:14.980: INFO: stderr: "" May 6 20:48:14.980: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:48:14.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9176" for this suite. 
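The scale-down and scale-up above are plain kubectl scale calls against the replication controller; between them the suite polls get pods with a Go template until the pod list and image versions converge, and the teardown uses delete --grace-period=0 --force. Stripped of the suite's --server/--kubeconfig plumbing, the core commands are:

    kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9176
    kubectl get pods -l name=update-demo --namespace=kubectl-9176 \
      -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
    kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9176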
• [SLOW TEST:26.435 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":174,"skipped":2867,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:48:14.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 6 20:48:19.453: INFO: &Pod{ObjectMeta:{send-events-d216b986-352b-422b-946c-5b5bf322f2eb events-7017 /api/v1/namespaces/events-7017/pods/send-events-d216b986-352b-422b-946c-5b5bf322f2eb 709861ed-a9da-4eb3-9e09-e0a2f5ef1986 2097007 0 2020-05-06 20:48:15 +0000 UTC map[name:foo time:381339755] map[] [] [] [{e2e.test Update v1 2020-05-06 20:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 20:48:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.159\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cr86p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cr86p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cr86p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 20:48:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 20:48:19 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 20:48:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 20:48:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.159,StartTime:2020-05-06 20:48:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 20:48:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://ff4d218178777eec127fa11546c8df2cab43b1d955d24a5126db95e6ea6289f9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.159,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 6 20:48:21.459: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 6 20:48:23.470: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:48:23.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7017" for this suite. • [SLOW TEST:8.517 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":175,"skipped":2896,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:48:23.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 6 20:48:23.595: INFO: Waiting up to 5m0s for pod "pod-7d98569e-7e36-43fb-ba72-3a55c57f5eb0" in namespace "emptydir-320" to be "Succeeded or Failed" May 6 20:48:23.604: INFO: Pod "pod-7d98569e-7e36-43fb-ba72-3a55c57f5eb0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.07575ms May 6 20:48:25.825: INFO: Pod "pod-7d98569e-7e36-43fb-ba72-3a55c57f5eb0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.229994578s May 6 20:48:27.830: INFO: Pod "pod-7d98569e-7e36-43fb-ba72-3a55c57f5eb0": Phase="Running", Reason="", readiness=true. Elapsed: 4.235104266s May 6 20:48:29.835: INFO: Pod "pod-7d98569e-7e36-43fb-ba72-3a55c57f5eb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.239586219s STEP: Saw pod success May 6 20:48:29.835: INFO: Pod "pod-7d98569e-7e36-43fb-ba72-3a55c57f5eb0" satisfied condition "Succeeded or Failed" May 6 20:48:29.838: INFO: Trying to get logs from node latest-worker2 pod pod-7d98569e-7e36-43fb-ba72-3a55c57f5eb0 container test-container: STEP: delete the pod May 6 20:48:29.883: INFO: Waiting for pod pod-7d98569e-7e36-43fb-ba72-3a55c57f5eb0 to disappear May 6 20:48:29.910: INFO: Pod pod-7d98569e-7e36-43fb-ba72-3a55c57f5eb0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:48:29.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-320" for this suite. • [SLOW TEST:6.417 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":176,"skipped":2912,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:48:29.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-6308/configmap-test-bfaa2874-e2ae-41f6-97a9-3df61a7682d0 STEP: Creating a pod to test consume configMaps May 6 20:48:30.158: INFO: Waiting up to 5m0s for pod "pod-configmaps-474160fa-dbfb-4074-80cf-80540ecb6ebe" in namespace "configmap-6308" to be "Succeeded or Failed" May 6 20:48:30.175: INFO: Pod "pod-configmaps-474160fa-dbfb-4074-80cf-80540ecb6ebe": Phase="Pending", Reason="", readiness=false. Elapsed: 16.65279ms May 6 20:48:32.190: INFO: Pod "pod-configmaps-474160fa-dbfb-4074-80cf-80540ecb6ebe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032612641s May 6 20:48:34.195: INFO: Pod "pod-configmaps-474160fa-dbfb-4074-80cf-80540ecb6ebe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037575474s STEP: Saw pod success May 6 20:48:34.196: INFO: Pod "pod-configmaps-474160fa-dbfb-4074-80cf-80540ecb6ebe" satisfied condition "Succeeded or Failed" May 6 20:48:34.199: INFO: Trying to get logs from node latest-worker pod pod-configmaps-474160fa-dbfb-4074-80cf-80540ecb6ebe container env-test: STEP: delete the pod May 6 20:48:34.238: INFO: Waiting for pod pod-configmaps-474160fa-dbfb-4074-80cf-80540ecb6ebe to disappear May 6 20:48:34.241: INFO: Pod pod-configmaps-474160fa-dbfb-4074-80cf-80540ecb6ebe no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:48:34.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6308" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":177,"skipped":2916,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:48:34.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 6 20:48:34.344: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4072 /api/v1/namespaces/watch-4072/configmaps/e2e-watch-test-label-changed c370c30c-7a96-408f-b837-1dc964deb5ab 2097129 0 2020-05-06 20:48:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-06 20:48:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 6 20:48:34.345: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4072 /api/v1/namespaces/watch-4072/configmaps/e2e-watch-test-label-changed c370c30c-7a96-408f-b837-1dc964deb5ab 2097130 0 2020-05-06 20:48:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-06 20:48:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 20:48:34.345: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4072 /api/v1/namespaces/watch-4072/configmaps/e2e-watch-test-label-changed c370c30c-7a96-408f-b837-1dc964deb5ab 2097131 0 2020-05-06 20:48:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-06 20:48:34 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 6 20:48:44.743: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4072 /api/v1/namespaces/watch-4072/configmaps/e2e-watch-test-label-changed c370c30c-7a96-408f-b837-1dc964deb5ab 2097178 0 2020-05-06 20:48:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-06 20:48:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 20:48:44.744: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4072 /api/v1/namespaces/watch-4072/configmaps/e2e-watch-test-label-changed c370c30c-7a96-408f-b837-1dc964deb5ab 2097179 0 2020-05-06 20:48:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-06 20:48:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 20:48:44.744: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4072 /api/v1/namespaces/watch-4072/configmaps/e2e-watch-test-label-changed c370c30c-7a96-408f-b837-1dc964deb5ab 2097182 0 2020-05-06 20:48:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-06 20:48:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:48:44.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4072" for this suite. 
• [SLOW TEST:10.523 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":178,"skipped":2918,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:48:44.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 6 20:48:44.832: INFO: >>> kubeConfig: /root/.kube/config May 6 20:48:47.759: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:48:58.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6169" for this suite. 
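[Note] This test registers two CRDs that share a group and version but declare different kinds, then checks that both schemas show up in the aggregated OpenAPI document. The published document can be inspected directly; a hedged sketch (the path is the standard OpenAPI v2 endpoint, not something printed by this run, and <Kind> is a placeholder):

$ kubectl get --raw /openapi/v2 | jq '.definitions | keys | .[]' | grep <Kind>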
• [SLOW TEST:13.705 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":179,"skipped":2957,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:48:58.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2567.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2567.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2567.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2567.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2567.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2567.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 20:49:04.805: INFO: DNS probes using dns-2567/dns-test-38642db5-de4c-4d88-997e-9db179a9c020 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:49:04.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2567" for this suite. 
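[Note] The wheezy/jessie probe scripts above encode the two conventions under test: names resolvable from the kubelet-managed /etc/hosts via getent, and a pod A record of the form <a>-<b>-<c>-<d>.<namespace>.pod.cluster.local built from the pod's own IP (the scripts derive it with `hostname -i`). Run by hand inside a probe pod, the core checks reduce to roughly:

$ getent hosts dns-querier-1.dns-test-service.<ns>.svc.cluster.local
$ dig +notcp +noall +answer +search <a>-<b>-<c>-<d>.<ns>.pod.cluster.local A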
• [SLOW TEST:6.624 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":180,"skipped":2965,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:49:05.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 6 20:49:13.782: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7071 PodName:pod-sharedvolume-d52ad0e0-ab8e-419e-b184-a07ebfd1b526 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:49:13.782: INFO: >>> kubeConfig: /root/.kube/config I0506 20:49:13.816613 7 log.go:172] (0xc0052afd90) (0xc001fb3540) Create stream I0506 20:49:13.816642 7 log.go:172] (0xc0052afd90) (0xc001fb3540) Stream added, broadcasting: 1 I0506 20:49:13.819191 7 log.go:172] (0xc0052afd90) Reply frame received for 1 I0506 20:49:13.819229 7 log.go:172] (0xc0052afd90) (0xc001fb35e0) Create stream I0506 20:49:13.819248 7 log.go:172] (0xc0052afd90) (0xc001fb35e0) Stream added, broadcasting: 3 I0506 20:49:13.820398 7 log.go:172] (0xc0052afd90) Reply frame received for 3 I0506 20:49:13.820439 7 log.go:172] (0xc0052afd90) (0xc001fb3680) Create stream I0506 20:49:13.820455 7 log.go:172] (0xc0052afd90) (0xc001fb3680) Stream added, broadcasting: 5 I0506 20:49:13.821564 7 log.go:172] (0xc0052afd90) Reply frame received for 5 I0506 20:49:13.914059 7 log.go:172] (0xc0052afd90) Data frame received for 5 I0506 20:49:13.914107 7 log.go:172] (0xc001fb3680) (5) Data frame handling I0506 20:49:13.914133 7 log.go:172] (0xc0052afd90) Data frame received for 3 I0506 20:49:13.914162 7 log.go:172] (0xc001fb35e0) (3) Data frame handling I0506 20:49:13.914193 7 log.go:172] (0xc001fb35e0) (3) Data frame sent I0506 20:49:13.914240 7 log.go:172] (0xc0052afd90) Data frame received for 3 I0506 20:49:13.914257 7 log.go:172] (0xc001fb35e0) (3) Data frame handling I0506 20:49:13.915552 7 log.go:172] (0xc0052afd90) Data frame received for 1 I0506 20:49:13.915577 7 log.go:172] (0xc001fb3540) (1) Data frame handling I0506 20:49:13.915595 7 log.go:172] (0xc001fb3540) (1) Data frame sent I0506 20:49:13.915617 7 log.go:172] (0xc0052afd90) (0xc001fb3540) Stream removed, broadcasting: 1 I0506 20:49:13.915632 7 log.go:172] (0xc0052afd90) Go away received I0506 20:49:13.915906 7 log.go:172] (0xc0052afd90) (0xc001fb3540) Stream removed, broadcasting: 1 I0506
20:49:13.915952 7 log.go:172] (0xc0052afd90) (0xc001fb35e0) Stream removed, broadcasting: 3 I0506 20:49:13.915976 7 log.go:172] (0xc0052afd90) (0xc001fb3680) Stream removed, broadcasting: 5 May 6 20:49:13.915: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:49:13.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7071" for this suite. • [SLOW TEST:8.821 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":181,"skipped":2970,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:49:13.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2123 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2123 STEP: creating replication controller externalsvc in namespace services-2123 I0506 20:49:14.170861 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2123, replica count: 2 I0506 20:49:17.221575 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 20:49:20.221835 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 20:49:23.222098 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 6 20:49:23.371: INFO: Creating new exec pod May 6 20:49:27.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2123 execpodxthjz -- /bin/sh -x -c nslookup clusterip-service' May 6 20:49:27.612: INFO: stderr: "I0506 20:49:27.534274 2831 log.go:172] (0xc00003b550) (0xc00036d4a0) Create stream\nI0506 20:49:27.534334 2831 log.go:172] (0xc00003b550) (0xc00036d4a0) Stream added, broadcasting: 1\nI0506 20:49:27.536938 2831 log.go:172] (0xc00003b550) Reply frame received for 
1\nI0506 20:49:27.536988 2831 log.go:172] (0xc00003b550) (0xc0000d61e0) Create stream\nI0506 20:49:27.537010 2831 log.go:172] (0xc00003b550) (0xc0000d61e0) Stream added, broadcasting: 3\nI0506 20:49:27.537995 2831 log.go:172] (0xc00003b550) Reply frame received for 3\nI0506 20:49:27.538034 2831 log.go:172] (0xc00003b550) (0xc000220640) Create stream\nI0506 20:49:27.538045 2831 log.go:172] (0xc00003b550) (0xc000220640) Stream added, broadcasting: 5\nI0506 20:49:27.538944 2831 log.go:172] (0xc00003b550) Reply frame received for 5\nI0506 20:49:27.597569 2831 log.go:172] (0xc00003b550) Data frame received for 5\nI0506 20:49:27.597589 2831 log.go:172] (0xc000220640) (5) Data frame handling\nI0506 20:49:27.597600 2831 log.go:172] (0xc000220640) (5) Data frame sent\n+ nslookup clusterip-service\nI0506 20:49:27.603732 2831 log.go:172] (0xc00003b550) Data frame received for 3\nI0506 20:49:27.603763 2831 log.go:172] (0xc0000d61e0) (3) Data frame handling\nI0506 20:49:27.603785 2831 log.go:172] (0xc0000d61e0) (3) Data frame sent\nI0506 20:49:27.604506 2831 log.go:172] (0xc00003b550) Data frame received for 3\nI0506 20:49:27.604527 2831 log.go:172] (0xc0000d61e0) (3) Data frame handling\nI0506 20:49:27.604546 2831 log.go:172] (0xc0000d61e0) (3) Data frame sent\nI0506 20:49:27.604986 2831 log.go:172] (0xc00003b550) Data frame received for 5\nI0506 20:49:27.605022 2831 log.go:172] (0xc000220640) (5) Data frame handling\nI0506 20:49:27.605042 2831 log.go:172] (0xc00003b550) Data frame received for 3\nI0506 20:49:27.605051 2831 log.go:172] (0xc0000d61e0) (3) Data frame handling\nI0506 20:49:27.607140 2831 log.go:172] (0xc00003b550) Data frame received for 1\nI0506 20:49:27.607158 2831 log.go:172] (0xc00036d4a0) (1) Data frame handling\nI0506 20:49:27.607175 2831 log.go:172] (0xc00036d4a0) (1) Data frame sent\nI0506 20:49:27.607185 2831 log.go:172] (0xc00003b550) (0xc00036d4a0) Stream removed, broadcasting: 1\nI0506 20:49:27.607200 2831 log.go:172] (0xc00003b550) Go away received\nI0506 20:49:27.607709 2831 log.go:172] (0xc00003b550) (0xc00036d4a0) Stream removed, broadcasting: 1\nI0506 20:49:27.607734 2831 log.go:172] (0xc00003b550) (0xc0000d61e0) Stream removed, broadcasting: 3\nI0506 20:49:27.607747 2831 log.go:172] (0xc00003b550) (0xc000220640) Stream removed, broadcasting: 5\n" May 6 20:49:27.612: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-2123.svc.cluster.local\tcanonical name = externalsvc.services-2123.svc.cluster.local.\nName:\texternalsvc.services-2123.svc.cluster.local\nAddress: 10.97.115.234\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2123, will wait for the garbage collector to delete the pods May 6 20:49:27.672: INFO: Deleting ReplicationController externalsvc took: 6.066405ms May 6 20:49:28.072: INFO: Terminating ReplicationController externalsvc pods took: 400.468967ms May 6 20:49:35.348: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:49:35.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2123" for this suite. 
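[Note] The nslookup output above is the actual assertion: once the service type flips to ExternalName, resolving the short name clusterip-service from inside the cluster yields a CNAME to externalsvc.services-2123.svc.cluster.local instead of a ClusterIP A record. The same check from any exec-capable pod (namespace and pod name are placeholders):

$ kubectl exec --namespace=<ns> <exec-pod> -- nslookup clusterip-service

Resolution of the short name relies on the namespace's svc search path in the pod's resolv.conf.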
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:21.506 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":182,"skipped":2977,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:49:35.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-5b8a718b-d0a2-4232-b5d5-3e082e859d06 STEP: Creating a pod to test consume secrets May 6 20:49:35.521: INFO: Waiting up to 5m0s for pod "pod-secrets-4b1ad020-8aa9-4248-baf8-357da0278870" in namespace "secrets-1012" to be "Succeeded or Failed" May 6 20:49:35.606: INFO: Pod "pod-secrets-4b1ad020-8aa9-4248-baf8-357da0278870": Phase="Pending", Reason="", readiness=false. Elapsed: 85.256302ms May 6 20:49:37.671: INFO: Pod "pod-secrets-4b1ad020-8aa9-4248-baf8-357da0278870": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149895666s May 6 20:49:39.983: INFO: Pod "pod-secrets-4b1ad020-8aa9-4248-baf8-357da0278870": Phase="Pending", Reason="", readiness=false. Elapsed: 4.461630478s May 6 20:49:42.112: INFO: Pod "pod-secrets-4b1ad020-8aa9-4248-baf8-357da0278870": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.591352098s STEP: Saw pod success May 6 20:49:42.112: INFO: Pod "pod-secrets-4b1ad020-8aa9-4248-baf8-357da0278870" satisfied condition "Succeeded or Failed" May 6 20:49:42.124: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-4b1ad020-8aa9-4248-baf8-357da0278870 container secret-volume-test: STEP: delete the pod May 6 20:49:42.249: INFO: Waiting for pod pod-secrets-4b1ad020-8aa9-4248-baf8-357da0278870 to disappear May 6 20:49:42.267: INFO: Pod pod-secrets-4b1ad020-8aa9-4248-baf8-357da0278870 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:49:42.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1012" for this suite. 
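[Note] The pod in this test consumes the secret as a volume, and its secret-volume-test container asserts the mounted file's content and mode. A minimal sketch of an equivalent setup, assuming illustrative names and a stock busybox image rather than the e2e test image:

$ kubectl create secret generic secret-test --from-literal=data-1=value-1 --namespace=<ns>
$ kubectl apply --namespace=<ns> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]  # prints the secret payload, then exits
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
EOF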
• [SLOW TEST:6.848 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":183,"skipped":2990,"failed":0} SSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:49:42.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 6 20:49:54.653: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9187 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:49:54.653: INFO: >>> kubeConfig: /root/.kube/config I0506 20:49:54.707017 7 log.go:172] (0xc00580a6e0) (0xc002bae780) Create stream I0506 20:49:54.707049 7 log.go:172] (0xc00580a6e0) (0xc002bae780) Stream added, broadcasting: 1 I0506 20:49:54.708527 7 log.go:172] (0xc00580a6e0) Reply frame received for 1 I0506 20:49:54.708567 7 log.go:172] (0xc00580a6e0) (0xc001426a00) Create stream I0506 20:49:54.708579 7 log.go:172] (0xc00580a6e0) (0xc001426a00) Stream added, broadcasting: 3 I0506 20:49:54.709613 7 log.go:172] (0xc00580a6e0) Reply frame received for 3 I0506 20:49:54.709645 7 log.go:172] (0xc00580a6e0) (0xc002bae8c0) Create stream I0506 20:49:54.709656 7 log.go:172] (0xc00580a6e0) (0xc002bae8c0) Stream added, broadcasting: 5 I0506 20:49:54.710616 7 log.go:172] (0xc00580a6e0) Reply frame received for 5 I0506 20:49:54.784078 7 log.go:172] (0xc00580a6e0) Data frame received for 5 I0506 20:49:54.784117 7 log.go:172] (0xc002bae8c0) (5) Data frame handling I0506 20:49:54.784139 7 log.go:172] (0xc00580a6e0) Data frame received for 3 I0506 20:49:54.784154 7 log.go:172] (0xc001426a00) (3) Data frame handling I0506 20:49:54.784167 7 log.go:172] (0xc001426a00) (3) Data frame sent I0506 20:49:54.784178 7 log.go:172] (0xc00580a6e0) Data frame received for 3 I0506 20:49:54.784185 7 log.go:172] (0xc001426a00) (3) Data frame handling I0506 20:49:54.785248 7 log.go:172] (0xc00580a6e0) Data frame received for 1 I0506 20:49:54.785266 7 log.go:172] (0xc002bae780) (1) Data frame handling I0506 20:49:54.785279 7 log.go:172] (0xc002bae780) (1) Data frame sent I0506 20:49:54.785297 7 log.go:172] (0xc00580a6e0) (0xc002bae780) Stream removed, broadcasting: 1 I0506 20:49:54.785310 7 log.go:172] (0xc00580a6e0) Go away received I0506 20:49:54.785467 7 log.go:172] 
(0xc00580a6e0) (0xc002bae780) Stream removed, broadcasting: 1 I0506 20:49:54.785491 7 log.go:172] (0xc00580a6e0) (0xc001426a00) Stream removed, broadcasting: 3 I0506 20:49:54.785521 7 log.go:172] (0xc00580a6e0) (0xc002bae8c0) Stream removed, broadcasting: 5 May 6 20:49:54.785: INFO: Exec stderr: "" May 6 20:49:54.785: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9187 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:49:54.785: INFO: >>> kubeConfig: /root/.kube/config I0506 20:49:54.814952 7 log.go:172] (0xc0058a4630) (0xc0024a0dc0) Create stream I0506 20:49:54.814978 7 log.go:172] (0xc0058a4630) (0xc0024a0dc0) Stream added, broadcasting: 1 I0506 20:49:54.822008 7 log.go:172] (0xc0058a4630) Reply frame received for 1 I0506 20:49:54.822050 7 log.go:172] (0xc0058a4630) (0xc002baeaa0) Create stream I0506 20:49:54.822058 7 log.go:172] (0xc0058a4630) (0xc002baeaa0) Stream added, broadcasting: 3 I0506 20:49:54.823351 7 log.go:172] (0xc0058a4630) Reply frame received for 3 I0506 20:49:54.823388 7 log.go:172] (0xc0058a4630) (0xc002baeb40) Create stream I0506 20:49:54.823400 7 log.go:172] (0xc0058a4630) (0xc002baeb40) Stream added, broadcasting: 5 I0506 20:49:54.824101 7 log.go:172] (0xc0058a4630) Reply frame received for 5 I0506 20:49:54.948782 7 log.go:172] (0xc0058a4630) Data frame received for 5 I0506 20:49:54.948842 7 log.go:172] (0xc002baeb40) (5) Data frame handling I0506 20:49:54.948886 7 log.go:172] (0xc0058a4630) Data frame received for 3 I0506 20:49:54.948904 7 log.go:172] (0xc002baeaa0) (3) Data frame handling I0506 20:49:54.948924 7 log.go:172] (0xc002baeaa0) (3) Data frame sent I0506 20:49:54.948944 7 log.go:172] (0xc0058a4630) Data frame received for 3 I0506 20:49:54.948959 7 log.go:172] (0xc002baeaa0) (3) Data frame handling I0506 20:49:54.950470 7 log.go:172] (0xc0058a4630) Data frame received for 1 I0506 20:49:54.950503 7 log.go:172] (0xc0024a0dc0) (1) Data frame handling I0506 20:49:54.950530 7 log.go:172] (0xc0024a0dc0) (1) Data frame sent I0506 20:49:54.950557 7 log.go:172] (0xc0058a4630) (0xc0024a0dc0) Stream removed, broadcasting: 1 I0506 20:49:54.950673 7 log.go:172] (0xc0058a4630) (0xc0024a0dc0) Stream removed, broadcasting: 1 I0506 20:49:54.950702 7 log.go:172] (0xc0058a4630) (0xc002baeaa0) Stream removed, broadcasting: 3 I0506 20:49:54.950723 7 log.go:172] (0xc0058a4630) (0xc002baeb40) Stream removed, broadcasting: 5 May 6 20:49:54.950: INFO: Exec stderr: "" May 6 20:49:54.950: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9187 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:49:54.950: INFO: >>> kubeConfig: /root/.kube/config I0506 20:49:54.953282 7 log.go:172] (0xc0058a4630) Go away received I0506 20:49:54.979670 7 log.go:172] (0xc00580ad10) (0xc002baee60) Create stream I0506 20:49:54.979694 7 log.go:172] (0xc00580ad10) (0xc002baee60) Stream added, broadcasting: 1 I0506 20:49:54.981533 7 log.go:172] (0xc00580ad10) Reply frame received for 1 I0506 20:49:54.981609 7 log.go:172] (0xc00580ad10) (0xc002baefa0) Create stream I0506 20:49:54.981629 7 log.go:172] (0xc00580ad10) (0xc002baefa0) Stream added, broadcasting: 3 I0506 20:49:54.982921 7 log.go:172] (0xc00580ad10) Reply frame received for 3 I0506 20:49:54.982945 7 log.go:172] (0xc00580ad10) (0xc00116cfa0) Create stream I0506 20:49:54.982956 7 log.go:172] (0xc00580ad10) (0xc00116cfa0) Stream added, 
broadcasting: 5 I0506 20:49:54.984037 7 log.go:172] (0xc00580ad10) Reply frame received for 5 I0506 20:49:55.040822 7 log.go:172] (0xc00580ad10) Data frame received for 5 I0506 20:49:55.040864 7 log.go:172] (0xc00116cfa0) (5) Data frame handling I0506 20:49:55.040897 7 log.go:172] (0xc00580ad10) Data frame received for 3 I0506 20:49:55.040913 7 log.go:172] (0xc002baefa0) (3) Data frame handling I0506 20:49:55.040932 7 log.go:172] (0xc002baefa0) (3) Data frame sent I0506 20:49:55.040947 7 log.go:172] (0xc00580ad10) Data frame received for 3 I0506 20:49:55.040959 7 log.go:172] (0xc002baefa0) (3) Data frame handling I0506 20:49:55.042346 7 log.go:172] (0xc00580ad10) Data frame received for 1 I0506 20:49:55.042379 7 log.go:172] (0xc002baee60) (1) Data frame handling I0506 20:49:55.042428 7 log.go:172] (0xc002baee60) (1) Data frame sent I0506 20:49:55.042446 7 log.go:172] (0xc00580ad10) (0xc002baee60) Stream removed, broadcasting: 1 I0506 20:49:55.042520 7 log.go:172] (0xc00580ad10) Go away received I0506 20:49:55.042591 7 log.go:172] (0xc00580ad10) (0xc002baee60) Stream removed, broadcasting: 1 I0506 20:49:55.042603 7 log.go:172] (0xc00580ad10) (0xc002baefa0) Stream removed, broadcasting: 3 I0506 20:49:55.042609 7 log.go:172] (0xc00580ad10) (0xc00116cfa0) Stream removed, broadcasting: 5 May 6 20:49:55.042: INFO: Exec stderr: "" May 6 20:49:55.042: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9187 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:49:55.042: INFO: >>> kubeConfig: /root/.kube/config I0506 20:49:55.074274 7 log.go:172] (0xc0058a4c60) (0xc0024a0fa0) Create stream I0506 20:49:55.074304 7 log.go:172] (0xc0058a4c60) (0xc0024a0fa0) Stream added, broadcasting: 1 I0506 20:49:55.076647 7 log.go:172] (0xc0058a4c60) Reply frame received for 1 I0506 20:49:55.076691 7 log.go:172] (0xc0058a4c60) (0xc0029a5b80) Create stream I0506 20:49:55.076702 7 log.go:172] (0xc0058a4c60) (0xc0029a5b80) Stream added, broadcasting: 3 I0506 20:49:55.077909 7 log.go:172] (0xc0058a4c60) Reply frame received for 3 I0506 20:49:55.077955 7 log.go:172] (0xc0058a4c60) (0xc0024a10e0) Create stream I0506 20:49:55.077977 7 log.go:172] (0xc0058a4c60) (0xc0024a10e0) Stream added, broadcasting: 5 I0506 20:49:55.078775 7 log.go:172] (0xc0058a4c60) Reply frame received for 5 I0506 20:49:55.156397 7 log.go:172] (0xc0058a4c60) Data frame received for 5 I0506 20:49:55.156437 7 log.go:172] (0xc0024a10e0) (5) Data frame handling I0506 20:49:55.156472 7 log.go:172] (0xc0058a4c60) Data frame received for 3 I0506 20:49:55.156492 7 log.go:172] (0xc0029a5b80) (3) Data frame handling I0506 20:49:55.156509 7 log.go:172] (0xc0029a5b80) (3) Data frame sent I0506 20:49:55.156522 7 log.go:172] (0xc0058a4c60) Data frame received for 3 I0506 20:49:55.156532 7 log.go:172] (0xc0029a5b80) (3) Data frame handling I0506 20:49:55.157861 7 log.go:172] (0xc0058a4c60) Data frame received for 1 I0506 20:49:55.157917 7 log.go:172] (0xc0024a0fa0) (1) Data frame handling I0506 20:49:55.157960 7 log.go:172] (0xc0024a0fa0) (1) Data frame sent I0506 20:49:55.157982 7 log.go:172] (0xc0058a4c60) (0xc0024a0fa0) Stream removed, broadcasting: 1 I0506 20:49:55.158018 7 log.go:172] (0xc0058a4c60) Go away received I0506 20:49:55.158152 7 log.go:172] (0xc0058a4c60) (0xc0024a0fa0) Stream removed, broadcasting: 1 I0506 20:49:55.158184 7 log.go:172] (0xc0058a4c60) (0xc0029a5b80) Stream removed, broadcasting: 3 I0506 20:49:55.158196 7 log.go:172] 
(0xc0058a4c60) (0xc0024a10e0) Stream removed, broadcasting: 5 May 6 20:49:55.158: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 6 20:49:55.158: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9187 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:49:55.158: INFO: >>> kubeConfig: /root/.kube/config I0506 20:49:55.185779 7 log.go:172] (0xc005d64630) (0xc0029a5ea0) Create stream I0506 20:49:55.185812 7 log.go:172] (0xc005d64630) (0xc0029a5ea0) Stream added, broadcasting: 1 I0506 20:49:55.187601 7 log.go:172] (0xc005d64630) Reply frame received for 1 I0506 20:49:55.187633 7 log.go:172] (0xc005d64630) (0xc001426b40) Create stream I0506 20:49:55.187644 7 log.go:172] (0xc005d64630) (0xc001426b40) Stream added, broadcasting: 3 I0506 20:49:55.188557 7 log.go:172] (0xc005d64630) Reply frame received for 3 I0506 20:49:55.188610 7 log.go:172] (0xc005d64630) (0xc001426c80) Create stream I0506 20:49:55.188626 7 log.go:172] (0xc005d64630) (0xc001426c80) Stream added, broadcasting: 5 I0506 20:49:55.189733 7 log.go:172] (0xc005d64630) Reply frame received for 5 I0506 20:49:55.254287 7 log.go:172] (0xc005d64630) Data frame received for 5 I0506 20:49:55.254319 7 log.go:172] (0xc001426c80) (5) Data frame handling I0506 20:49:55.254368 7 log.go:172] (0xc005d64630) Data frame received for 3 I0506 20:49:55.254399 7 log.go:172] (0xc001426b40) (3) Data frame handling I0506 20:49:55.254447 7 log.go:172] (0xc001426b40) (3) Data frame sent I0506 20:49:55.254467 7 log.go:172] (0xc005d64630) Data frame received for 3 I0506 20:49:55.254477 7 log.go:172] (0xc001426b40) (3) Data frame handling I0506 20:49:55.255598 7 log.go:172] (0xc005d64630) Data frame received for 1 I0506 20:49:55.255632 7 log.go:172] (0xc0029a5ea0) (1) Data frame handling I0506 20:49:55.255669 7 log.go:172] (0xc0029a5ea0) (1) Data frame sent I0506 20:49:55.255697 7 log.go:172] (0xc005d64630) (0xc0029a5ea0) Stream removed, broadcasting: 1 I0506 20:49:55.255726 7 log.go:172] (0xc005d64630) Go away received I0506 20:49:55.255811 7 log.go:172] (0xc005d64630) (0xc0029a5ea0) Stream removed, broadcasting: 1 I0506 20:49:55.255828 7 log.go:172] (0xc005d64630) (0xc001426b40) Stream removed, broadcasting: 3 I0506 20:49:55.255840 7 log.go:172] (0xc005d64630) (0xc001426c80) Stream removed, broadcasting: 5 May 6 20:49:55.255: INFO: Exec stderr: "" May 6 20:49:55.255: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9187 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:49:55.255: INFO: >>> kubeConfig: /root/.kube/config I0506 20:49:55.295664 7 log.go:172] (0xc0058a4f20) (0xc0024a1180) Create stream I0506 20:49:55.295700 7 log.go:172] (0xc0058a4f20) (0xc0024a1180) Stream added, broadcasting: 1 I0506 20:49:55.297916 7 log.go:172] (0xc0058a4f20) Reply frame received for 1 I0506 20:49:55.297949 7 log.go:172] (0xc0058a4f20) (0xc00116d2c0) Create stream I0506 20:49:55.297959 7 log.go:172] (0xc0058a4f20) (0xc00116d2c0) Stream added, broadcasting: 3 I0506 20:49:55.298939 7 log.go:172] (0xc0058a4f20) Reply frame received for 3 I0506 20:49:55.298965 7 log.go:172] (0xc0058a4f20) (0xc001eb0320) Create stream I0506 20:49:55.298975 7 log.go:172] (0xc0058a4f20) (0xc001eb0320) Stream added, broadcasting: 5 I0506 20:49:55.299620 7 log.go:172] (0xc0058a4f20) Reply frame received for 5 
I0506 20:49:55.368377 7 log.go:172] (0xc0058a4f20) Data frame received for 5 I0506 20:49:55.368402 7 log.go:172] (0xc001eb0320) (5) Data frame handling I0506 20:49:55.368424 7 log.go:172] (0xc0058a4f20) Data frame received for 3 I0506 20:49:55.368431 7 log.go:172] (0xc00116d2c0) (3) Data frame handling I0506 20:49:55.368440 7 log.go:172] (0xc00116d2c0) (3) Data frame sent I0506 20:49:55.368448 7 log.go:172] (0xc0058a4f20) Data frame received for 3 I0506 20:49:55.368460 7 log.go:172] (0xc00116d2c0) (3) Data frame handling I0506 20:49:55.369970 7 log.go:172] (0xc0058a4f20) Data frame received for 1 I0506 20:49:55.369989 7 log.go:172] (0xc0024a1180) (1) Data frame handling I0506 20:49:55.370008 7 log.go:172] (0xc0024a1180) (1) Data frame sent I0506 20:49:55.370295 7 log.go:172] (0xc0058a4f20) (0xc0024a1180) Stream removed, broadcasting: 1 I0506 20:49:55.370321 7 log.go:172] (0xc0058a4f20) Go away received I0506 20:49:55.370499 7 log.go:172] (0xc0058a4f20) (0xc0024a1180) Stream removed, broadcasting: 1 I0506 20:49:55.370528 7 log.go:172] (0xc0058a4f20) (0xc00116d2c0) Stream removed, broadcasting: 3 I0506 20:49:55.370546 7 log.go:172] (0xc0058a4f20) (0xc001eb0320) Stream removed, broadcasting: 5 May 6 20:49:55.370: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 6 20:49:55.370: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9187 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:49:55.370: INFO: >>> kubeConfig: /root/.kube/config I0506 20:49:55.395447 7 log.go:172] (0xc005d4c840) (0xc00116d9a0) Create stream I0506 20:49:55.395474 7 log.go:172] (0xc005d4c840) (0xc00116d9a0) Stream added, broadcasting: 1 I0506 20:49:55.397776 7 log.go:172] (0xc005d4c840) Reply frame received for 1 I0506 20:49:55.397828 7 log.go:172] (0xc005d4c840) (0xc0024a1400) Create stream I0506 20:49:55.397844 7 log.go:172] (0xc005d4c840) (0xc0024a1400) Stream added, broadcasting: 3 I0506 20:49:55.398831 7 log.go:172] (0xc005d4c840) Reply frame received for 3 I0506 20:49:55.398861 7 log.go:172] (0xc005d4c840) (0xc0024a1540) Create stream I0506 20:49:55.398870 7 log.go:172] (0xc005d4c840) (0xc0024a1540) Stream added, broadcasting: 5 I0506 20:49:55.399637 7 log.go:172] (0xc005d4c840) Reply frame received for 5 I0506 20:49:55.473378 7 log.go:172] (0xc005d4c840) Data frame received for 3 I0506 20:49:55.473407 7 log.go:172] (0xc0024a1400) (3) Data frame handling I0506 20:49:55.473429 7 log.go:172] (0xc0024a1400) (3) Data frame sent I0506 20:49:55.473436 7 log.go:172] (0xc005d4c840) Data frame received for 3 I0506 20:49:55.473442 7 log.go:172] (0xc0024a1400) (3) Data frame handling I0506 20:49:55.473459 7 log.go:172] (0xc005d4c840) Data frame received for 5 I0506 20:49:55.473479 7 log.go:172] (0xc0024a1540) (5) Data frame handling I0506 20:49:55.475467 7 log.go:172] (0xc005d4c840) Data frame received for 1 I0506 20:49:55.475483 7 log.go:172] (0xc00116d9a0) (1) Data frame handling I0506 20:49:55.475495 7 log.go:172] (0xc00116d9a0) (1) Data frame sent I0506 20:49:55.475504 7 log.go:172] (0xc005d4c840) (0xc00116d9a0) Stream removed, broadcasting: 1 I0506 20:49:55.475578 7 log.go:172] (0xc005d4c840) (0xc00116d9a0) Stream removed, broadcasting: 1 I0506 20:49:55.475589 7 log.go:172] (0xc005d4c840) (0xc0024a1400) Stream removed, broadcasting: 3 I0506 20:49:55.475698 7 log.go:172] (0xc005d4c840) (0xc0024a1540) Stream removed, broadcasting: 5 I0506 
20:49:55.475725 7 log.go:172] (0xc005d4c840) Go away received May 6 20:49:55.475: INFO: Exec stderr: "" May 6 20:49:55.475: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9187 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:49:55.475: INFO: >>> kubeConfig: /root/.kube/config I0506 20:49:55.511822 7 log.go:172] (0xc0061646e0) (0xc0014275e0) Create stream I0506 20:49:55.511848 7 log.go:172] (0xc0061646e0) (0xc0014275e0) Stream added, broadcasting: 1 I0506 20:49:55.513919 7 log.go:172] (0xc0061646e0) Reply frame received for 1 I0506 20:49:55.513963 7 log.go:172] (0xc0061646e0) (0xc002baf180) Create stream I0506 20:49:55.513973 7 log.go:172] (0xc0061646e0) (0xc002baf180) Stream added, broadcasting: 3 I0506 20:49:55.514710 7 log.go:172] (0xc0061646e0) Reply frame received for 3 I0506 20:49:55.514741 7 log.go:172] (0xc0061646e0) (0xc002baf220) Create stream I0506 20:49:55.514756 7 log.go:172] (0xc0061646e0) (0xc002baf220) Stream added, broadcasting: 5 I0506 20:49:55.515544 7 log.go:172] (0xc0061646e0) Reply frame received for 5 I0506 20:49:55.580589 7 log.go:172] (0xc0061646e0) Data frame received for 5 I0506 20:49:55.580637 7 log.go:172] (0xc002baf220) (5) Data frame handling I0506 20:49:55.580667 7 log.go:172] (0xc0061646e0) Data frame received for 3 I0506 20:49:55.580689 7 log.go:172] (0xc002baf180) (3) Data frame handling I0506 20:49:55.580717 7 log.go:172] (0xc002baf180) (3) Data frame sent I0506 20:49:55.580734 7 log.go:172] (0xc0061646e0) Data frame received for 3 I0506 20:49:55.580746 7 log.go:172] (0xc002baf180) (3) Data frame handling I0506 20:49:55.582477 7 log.go:172] (0xc0061646e0) Data frame received for 1 I0506 20:49:55.582500 7 log.go:172] (0xc0014275e0) (1) Data frame handling I0506 20:49:55.582514 7 log.go:172] (0xc0014275e0) (1) Data frame sent I0506 20:49:55.582527 7 log.go:172] (0xc0061646e0) (0xc0014275e0) Stream removed, broadcasting: 1 I0506 20:49:55.582615 7 log.go:172] (0xc0061646e0) (0xc0014275e0) Stream removed, broadcasting: 1 I0506 20:49:55.582641 7 log.go:172] (0xc0061646e0) (0xc002baf180) Stream removed, broadcasting: 3 I0506 20:49:55.582801 7 log.go:172] (0xc0061646e0) (0xc002baf220) Stream removed, broadcasting: 5 May 6 20:49:55.583: INFO: Exec stderr: "" May 6 20:49:55.583: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9187 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:49:55.583: INFO: >>> kubeConfig: /root/.kube/config I0506 20:49:55.583175 7 log.go:172] (0xc0061646e0) Go away received I0506 20:49:55.612431 7 log.go:172] (0xc005d4ce70) (0xc00179a320) Create stream I0506 20:49:55.612467 7 log.go:172] (0xc005d4ce70) (0xc00179a320) Stream added, broadcasting: 1 I0506 20:49:55.614798 7 log.go:172] (0xc005d4ce70) Reply frame received for 1 I0506 20:49:55.614834 7 log.go:172] (0xc005d4ce70) (0xc0024a1680) Create stream I0506 20:49:55.614848 7 log.go:172] (0xc005d4ce70) (0xc0024a1680) Stream added, broadcasting: 3 I0506 20:49:55.615998 7 log.go:172] (0xc005d4ce70) Reply frame received for 3 I0506 20:49:55.616044 7 log.go:172] (0xc005d4ce70) (0xc0014277c0) Create stream I0506 20:49:55.616066 7 log.go:172] (0xc005d4ce70) (0xc0014277c0) Stream added, broadcasting: 5 I0506 20:49:55.617317 7 log.go:172] (0xc005d4ce70) Reply frame received for 5 I0506 20:49:55.683260 7 log.go:172] (0xc005d4ce70) Data frame received for 3 I0506 
20:49:55.683292 7 log.go:172] (0xc0024a1680) (3) Data frame handling I0506 20:49:55.683308 7 log.go:172] (0xc0024a1680) (3) Data frame sent I0506 20:49:55.683327 7 log.go:172] (0xc005d4ce70) Data frame received for 3 I0506 20:49:55.683338 7 log.go:172] (0xc0024a1680) (3) Data frame handling I0506 20:49:55.683361 7 log.go:172] (0xc005d4ce70) Data frame received for 5 I0506 20:49:55.683374 7 log.go:172] (0xc0014277c0) (5) Data frame handling I0506 20:49:55.684761 7 log.go:172] (0xc005d4ce70) Data frame received for 1 I0506 20:49:55.684794 7 log.go:172] (0xc00179a320) (1) Data frame handling I0506 20:49:55.684817 7 log.go:172] (0xc00179a320) (1) Data frame sent I0506 20:49:55.684839 7 log.go:172] (0xc005d4ce70) (0xc00179a320) Stream removed, broadcasting: 1 I0506 20:49:55.684861 7 log.go:172] (0xc005d4ce70) Go away received I0506 20:49:55.685039 7 log.go:172] (0xc005d4ce70) (0xc00179a320) Stream removed, broadcasting: 1 I0506 20:49:55.685076 7 log.go:172] (0xc005d4ce70) (0xc0024a1680) Stream removed, broadcasting: 3 I0506 20:49:55.685100 7 log.go:172] (0xc005d4ce70) (0xc0014277c0) Stream removed, broadcasting: 5 May 6 20:49:55.685: INFO: Exec stderr: "" May 6 20:49:55.685: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9187 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:49:55.685: INFO: >>> kubeConfig: /root/.kube/config I0506 20:49:55.724505 7 log.go:172] (0xc005d64fd0) (0xc001eb05a0) Create stream I0506 20:49:55.724557 7 log.go:172] (0xc005d64fd0) (0xc001eb05a0) Stream added, broadcasting: 1 I0506 20:49:55.732888 7 log.go:172] (0xc005d64fd0) Reply frame received for 1 I0506 20:49:55.732949 7 log.go:172] (0xc005d64fd0) (0xc002bae000) Create stream I0506 20:49:55.732968 7 log.go:172] (0xc005d64fd0) (0xc002bae000) Stream added, broadcasting: 3 I0506 20:49:55.733898 7 log.go:172] (0xc005d64fd0) Reply frame received for 3 I0506 20:49:55.733947 7 log.go:172] (0xc005d64fd0) (0xc001eb0280) Create stream I0506 20:49:55.733961 7 log.go:172] (0xc005d64fd0) (0xc001eb0280) Stream added, broadcasting: 5 I0506 20:49:55.734598 7 log.go:172] (0xc005d64fd0) Reply frame received for 5 I0506 20:49:55.789020 7 log.go:172] (0xc005d64fd0) Data frame received for 5 I0506 20:49:55.789051 7 log.go:172] (0xc001eb0280) (5) Data frame handling I0506 20:49:55.789074 7 log.go:172] (0xc005d64fd0) Data frame received for 3 I0506 20:49:55.789088 7 log.go:172] (0xc002bae000) (3) Data frame handling I0506 20:49:55.789107 7 log.go:172] (0xc002bae000) (3) Data frame sent I0506 20:49:55.789324 7 log.go:172] (0xc005d64fd0) Data frame received for 3 I0506 20:49:55.789336 7 log.go:172] (0xc002bae000) (3) Data frame handling I0506 20:49:55.791207 7 log.go:172] (0xc005d64fd0) Data frame received for 1 I0506 20:49:55.791264 7 log.go:172] (0xc001eb05a0) (1) Data frame handling I0506 20:49:55.791305 7 log.go:172] (0xc001eb05a0) (1) Data frame sent I0506 20:49:55.791330 7 log.go:172] (0xc005d64fd0) (0xc001eb05a0) Stream removed, broadcasting: 1 I0506 20:49:55.791359 7 log.go:172] (0xc005d64fd0) Go away received I0506 20:49:55.791506 7 log.go:172] (0xc005d64fd0) (0xc001eb05a0) Stream removed, broadcasting: 1 I0506 20:49:55.791524 7 log.go:172] (0xc005d64fd0) (0xc002bae000) Stream removed, broadcasting: 3 I0506 20:49:55.791532 7 log.go:172] (0xc005d64fd0) (0xc001eb0280) Stream removed, broadcasting: 5 May 6 20:49:55.791: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:49:55.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-9187" for this suite. • [SLOW TEST:13.522 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":184,"skipped":2994,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:49:55.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 6 20:49:55.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 6 20:49:56.235: INFO: stderr: "" May 6 20:49:56.235: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:49:56.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9420" for this suite. 
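Note: the api-versions spec above only needs the discovery endpoint; shelling out to kubectl is incidental. The same assertion can be made directly with client-go discovery. A minimal sketch (only the kubeconfig path is taken from this run; everything else is illustrative):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// ServerGroups returns everything `kubectl api-versions` prints;
	// the legacy core group is reported as the bare GroupVersion "v1".
	groups, err := dc.ServerGroups()
	if err != nil {
		log.Fatal(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" {
				fmt.Println("v1 is served")
				return
			}
		}
	}
	log.Fatal("v1 not found in served API versions")
}

The bare "v1" entry is exactly the last line of the kubectl stdout captured above.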
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":185,"skipped":3017,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:49:56.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-6f68060c-6f2f-498c-b23c-5aa2315adbdc STEP: Creating a pod to test consume configMaps May 6 20:49:56.346: INFO: Waiting up to 5m0s for pod "pod-configmaps-67e41ee0-5487-4484-92a3-3d9cfe5f0155" in namespace "configmap-4829" to be "Succeeded or Failed" May 6 20:49:56.370: INFO: Pod "pod-configmaps-67e41ee0-5487-4484-92a3-3d9cfe5f0155": Phase="Pending", Reason="", readiness=false. Elapsed: 24.28825ms May 6 20:49:58.467: INFO: Pod "pod-configmaps-67e41ee0-5487-4484-92a3-3d9cfe5f0155": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12144536s May 6 20:50:00.538: INFO: Pod "pod-configmaps-67e41ee0-5487-4484-92a3-3d9cfe5f0155": Phase="Running", Reason="", readiness=true. Elapsed: 4.192031808s May 6 20:50:02.695: INFO: Pod "pod-configmaps-67e41ee0-5487-4484-92a3-3d9cfe5f0155": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.348759613s STEP: Saw pod success May 6 20:50:02.695: INFO: Pod "pod-configmaps-67e41ee0-5487-4484-92a3-3d9cfe5f0155" satisfied condition "Succeeded or Failed" May 6 20:50:02.698: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-67e41ee0-5487-4484-92a3-3d9cfe5f0155 container configmap-volume-test: STEP: delete the pod May 6 20:50:02.840: INFO: Waiting for pod pod-configmaps-67e41ee0-5487-4484-92a3-3d9cfe5f0155 to disappear May 6 20:50:02.878: INFO: Pod pod-configmaps-67e41ee0-5487-4484-92a3-3d9cfe5f0155 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:50:02.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4829" for this suite. 
• [SLOW TEST:6.646 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":186,"skipped":3028,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:50:02.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:50:03.278: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 6 20:50:06.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6534 create -f -' May 6 20:50:09.892: INFO: stderr: "" May 6 20:50:09.892: INFO: stdout: "e2e-test-crd-publish-openapi-1781-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 6 20:50:09.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6534 delete e2e-test-crd-publish-openapi-1781-crds test-cr' May 6 20:50:10.062: INFO: stderr: "" May 6 20:50:10.062: INFO: stdout: "e2e-test-crd-publish-openapi-1781-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 6 20:50:10.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6534 apply -f -' May 6 20:50:10.341: INFO: stderr: "" May 6 20:50:10.341: INFO: stdout: "e2e-test-crd-publish-openapi-1781-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 6 20:50:10.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6534 delete e2e-test-crd-publish-openapi-1781-crds test-cr' May 6 20:50:10.452: INFO: stderr: "" May 6 20:50:10.452: INFO: stdout: "e2e-test-crd-publish-openapi-1781-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 6 20:50:10.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1781-crds' May 6 20:50:10.720: INFO: stderr: "" May 6 20:50:10.720: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1781-crd\nVERSION: 
crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:50:13.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6534" for this suite. • [SLOW TEST:10.761 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":187,"skipped":3032,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:50:13.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-54237818-f4f7-4e51-b428-73fc2d1fea1f STEP: updating the pod May 6 20:50:22.259: INFO: Successfully updated pod "var-expansion-54237818-f4f7-4e51-b428-73fc2d1fea1f" STEP: waiting for pod and container restart STEP: Failing liveness probe May 6 20:50:22.287: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-8116 PodName:var-expansion-54237818-f4f7-4e51-b428-73fc2d1fea1f ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:50:22.287: INFO: >>> kubeConfig: /root/.kube/config I0506 20:50:22.323304 7 log.go:172] (0xc002d40160) (0xc001eb1540) 
Create stream I0506 20:50:22.323331 7 log.go:172] (0xc002d40160) (0xc001eb1540) Stream added, broadcasting: 1 I0506 20:50:22.325103 7 log.go:172] (0xc002d40160) Reply frame received for 1 I0506 20:50:22.325314 7 log.go:172] (0xc002d40160) (0xc0017d0f00) Create stream I0506 20:50:22.325324 7 log.go:172] (0xc002d40160) (0xc0017d0f00) Stream added, broadcasting: 3 I0506 20:50:22.326380 7 log.go:172] (0xc002d40160) Reply frame received for 3 I0506 20:50:22.326428 7 log.go:172] (0xc002d40160) (0xc000b740a0) Create stream I0506 20:50:22.326445 7 log.go:172] (0xc002d40160) (0xc000b740a0) Stream added, broadcasting: 5 I0506 20:50:22.327369 7 log.go:172] (0xc002d40160) Reply frame received for 5 I0506 20:50:22.384127 7 log.go:172] (0xc002d40160) Data frame received for 3 I0506 20:50:22.384160 7 log.go:172] (0xc0017d0f00) (3) Data frame handling I0506 20:50:22.384495 7 log.go:172] (0xc002d40160) Data frame received for 5 I0506 20:50:22.384558 7 log.go:172] (0xc000b740a0) (5) Data frame handling I0506 20:50:22.386512 7 log.go:172] (0xc002d40160) Data frame received for 1 I0506 20:50:22.386543 7 log.go:172] (0xc001eb1540) (1) Data frame handling I0506 20:50:22.386557 7 log.go:172] (0xc001eb1540) (1) Data frame sent I0506 20:50:22.386580 7 log.go:172] (0xc002d40160) (0xc001eb1540) Stream removed, broadcasting: 1 I0506 20:50:22.386604 7 log.go:172] (0xc002d40160) Go away received I0506 20:50:22.386730 7 log.go:172] (0xc002d40160) (0xc001eb1540) Stream removed, broadcasting: 1 I0506 20:50:22.386760 7 log.go:172] (0xc002d40160) (0xc0017d0f00) Stream removed, broadcasting: 3 I0506 20:50:22.386807 7 log.go:172] (0xc002d40160) (0xc000b740a0) Stream removed, broadcasting: 5 May 6 20:50:22.386: INFO: Pod exec output: / STEP: Waiting for container to restart May 6 20:50:22.391: INFO: Container dapi-container, restarts: 0 May 6 20:50:32.545: INFO: Container dapi-container, restarts: 0 May 6 20:50:42.395: INFO: Container dapi-container, restarts: 0 May 6 20:50:52.395: INFO: Container dapi-container, restarts: 0 May 6 20:51:02.395: INFO: Container dapi-container, restarts: 1 May 6 20:51:02.395: INFO: Container has restart count: 1 STEP: Rewriting the file May 6 20:51:02.398: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-8116 PodName:var-expansion-54237818-f4f7-4e51-b428-73fc2d1fea1f ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:51:02.398: INFO: >>> kubeConfig: /root/.kube/config I0506 20:51:02.427424 7 log.go:172] (0xc006164420) (0xc00058cf00) Create stream I0506 20:51:02.427450 7 log.go:172] (0xc006164420) (0xc00058cf00) Stream added, broadcasting: 1 I0506 20:51:02.429944 7 log.go:172] (0xc006164420) Reply frame received for 1 I0506 20:51:02.429991 7 log.go:172] (0xc006164420) (0xc000b75c20) Create stream I0506 20:51:02.430001 7 log.go:172] (0xc006164420) (0xc000b75c20) Stream added, broadcasting: 3 I0506 20:51:02.431136 7 log.go:172] (0xc006164420) Reply frame received for 3 I0506 20:51:02.431174 7 log.go:172] (0xc006164420) (0xc001eb1d60) Create stream I0506 20:51:02.431189 7 log.go:172] (0xc006164420) (0xc001eb1d60) Stream added, broadcasting: 5 I0506 20:51:02.432055 7 log.go:172] (0xc006164420) Reply frame received for 5 I0506 20:51:02.510377 7 log.go:172] (0xc006164420) Data frame received for 3 I0506 20:51:02.510409 7 log.go:172] (0xc000b75c20) (3) Data frame handling I0506 20:51:02.510443 7 log.go:172] (0xc006164420) Data frame received for 5 I0506 20:51:02.510463 7 log.go:172] 
(0xc001eb1d60) (5) Data frame handling I0506 20:51:02.511517 7 log.go:172] (0xc006164420) Data frame received for 1 I0506 20:51:02.511531 7 log.go:172] (0xc00058cf00) (1) Data frame handling I0506 20:51:02.511551 7 log.go:172] (0xc00058cf00) (1) Data frame sent I0506 20:51:02.511576 7 log.go:172] (0xc006164420) (0xc00058cf00) Stream removed, broadcasting: 1 I0506 20:51:02.511691 7 log.go:172] (0xc006164420) (0xc00058cf00) Stream removed, broadcasting: 1 I0506 20:51:02.511709 7 log.go:172] (0xc006164420) (0xc000b75c20) Stream removed, broadcasting: 3 I0506 20:51:02.511720 7 log.go:172] (0xc006164420) (0xc001eb1d60) Stream removed, broadcasting: 5 May 6 20:51:02.511: INFO: Pod exec output: STEP: Waiting for container to stop restarting I0506 20:51:02.511813 7 log.go:172] (0xc006164420) Go away received May 6 20:51:32.519: INFO: Container has restart count: 2 May 6 20:52:34.519: INFO: Container restart has stabilized STEP: test for subpath mounted with old value May 6 20:52:34.523: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-8116 PodName:var-expansion-54237818-f4f7-4e51-b428-73fc2d1fea1f ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:52:34.523: INFO: >>> kubeConfig: /root/.kube/config I0506 20:52:34.556003 7 log.go:172] (0xc00242a8f0) (0xc002baf220) Create stream I0506 20:52:34.556034 7 log.go:172] (0xc00242a8f0) (0xc002baf220) Stream added, broadcasting: 1 I0506 20:52:34.558169 7 log.go:172] (0xc00242a8f0) Reply frame received for 1 I0506 20:52:34.558238 7 log.go:172] (0xc00242a8f0) (0xc000b740a0) Create stream I0506 20:52:34.558255 7 log.go:172] (0xc00242a8f0) (0xc000b740a0) Stream added, broadcasting: 3 I0506 20:52:34.559103 7 log.go:172] (0xc00242a8f0) Reply frame received for 3 I0506 20:52:34.559129 7 log.go:172] (0xc00242a8f0) (0xc000b74460) Create stream I0506 20:52:34.559152 7 log.go:172] (0xc00242a8f0) (0xc000b74460) Stream added, broadcasting: 5 I0506 20:52:34.559924 7 log.go:172] (0xc00242a8f0) Reply frame received for 5 I0506 20:52:34.612552 7 log.go:172] (0xc00242a8f0) Data frame received for 5 I0506 20:52:34.612581 7 log.go:172] (0xc000b74460) (5) Data frame handling I0506 20:52:34.612604 7 log.go:172] (0xc00242a8f0) Data frame received for 3 I0506 20:52:34.612639 7 log.go:172] (0xc000b740a0) (3) Data frame handling I0506 20:52:34.614468 7 log.go:172] (0xc00242a8f0) Data frame received for 1 I0506 20:52:34.614495 7 log.go:172] (0xc002baf220) (1) Data frame handling I0506 20:52:34.614521 7 log.go:172] (0xc002baf220) (1) Data frame sent I0506 20:52:34.614542 7 log.go:172] (0xc00242a8f0) (0xc002baf220) Stream removed, broadcasting: 1 I0506 20:52:34.614575 7 log.go:172] (0xc00242a8f0) Go away received I0506 20:52:34.614740 7 log.go:172] (0xc00242a8f0) (0xc002baf220) Stream removed, broadcasting: 1 I0506 20:52:34.614762 7 log.go:172] (0xc00242a8f0) (0xc000b740a0) Stream removed, broadcasting: 3 I0506 20:52:34.614777 7 log.go:172] (0xc00242a8f0) (0xc000b74460) Stream removed, broadcasting: 5 May 6 20:52:34.995: INFO: ExecWithOptions {Command:[/bin/sh -c test ! 
-f /volume_mount/newsubpath/test.log] Namespace:var-expansion-8116 PodName:var-expansion-54237818-f4f7-4e51-b428-73fc2d1fea1f ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:52:34.995: INFO: >>> kubeConfig: /root/.kube/config I0506 20:52:35.023142 7 log.go:172] (0xc006164210) (0xc00058ca00) Create stream I0506 20:52:35.023187 7 log.go:172] (0xc006164210) (0xc00058ca00) Stream added, broadcasting: 1 I0506 20:52:35.033780 7 log.go:172] (0xc006164210) Reply frame received for 1 I0506 20:52:35.033822 7 log.go:172] (0xc006164210) (0xc0017d1180) Create stream I0506 20:52:35.033833 7 log.go:172] (0xc006164210) (0xc0017d1180) Stream added, broadcasting: 3 I0506 20:52:35.039334 7 log.go:172] (0xc006164210) Reply frame received for 3 I0506 20:52:35.039385 7 log.go:172] (0xc006164210) (0xc000b745a0) Create stream I0506 20:52:35.039403 7 log.go:172] (0xc006164210) (0xc000b745a0) Stream added, broadcasting: 5 I0506 20:52:35.040154 7 log.go:172] (0xc006164210) Reply frame received for 5 I0506 20:52:35.099000 7 log.go:172] (0xc006164210) Data frame received for 5 I0506 20:52:35.099091 7 log.go:172] (0xc000b745a0) (5) Data frame handling I0506 20:52:35.099139 7 log.go:172] (0xc006164210) Data frame received for 3 I0506 20:52:35.099164 7 log.go:172] (0xc0017d1180) (3) Data frame handling I0506 20:52:35.100413 7 log.go:172] (0xc006164210) Data frame received for 1 I0506 20:52:35.100431 7 log.go:172] (0xc00058ca00) (1) Data frame handling I0506 20:52:35.100451 7 log.go:172] (0xc00058ca00) (1) Data frame sent I0506 20:52:35.100468 7 log.go:172] (0xc006164210) (0xc00058ca00) Stream removed, broadcasting: 1 I0506 20:52:35.100524 7 log.go:172] (0xc006164210) Go away received I0506 20:52:35.100586 7 log.go:172] (0xc006164210) (0xc00058ca00) Stream removed, broadcasting: 1 I0506 20:52:35.100602 7 log.go:172] (0xc006164210) (0xc0017d1180) Stream removed, broadcasting: 3 I0506 20:52:35.100613 7 log.go:172] (0xc006164210) (0xc000b745a0) Stream removed, broadcasting: 5 May 6 20:52:35.100: INFO: Deleting pod "var-expansion-54237818-f4f7-4e51-b428-73fc2d1fea1f" in namespace "var-expansion-8116" May 6 20:52:35.106: INFO: Wait up to 5m0s for pod "var-expansion-54237818-f4f7-4e51-b428-73fc2d1fea1f" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:53:15.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8116" for this suite. 
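The variable-expansion spec above hinges on subPathExpr: the kubelet expands the mount's subpath from an environment variable once, at container start, and the spec asserts the mount stays at the old expansion (foo/) even after the variable is updated (to newsubpath) and the container restarts. Roughly, the container looks like the following; the foo/newsubpath values are from the log, the env var name and volume name are illustrative:

package sketch

import corev1 "k8s.io/api/core/v1"

func subPathExprContainer() corev1.Container {
	return corev1.Container{
		Name:  "dapi-container",
		Image: "busybox",
		Env: []corev1.EnvVar{{
			Name:  "POD_NAME",
			Value: "foo", // later updated to "newsubpath"; the mount must not move
		}},
		VolumeMounts: []corev1.VolumeMount{
			// Mounted under the expansion of $(POD_NAME), resolved once at start.
			{Name: "workdir1", MountPath: "/subpath_mount", SubPathExpr: "$(POD_NAME)"},
			// The whole volume, which the spec uses to poke at foo/test.log directly
			// (the rm that fails the liveness probe, the rewrite, the final test -f).
			{Name: "workdir1", MountPath: "/volume_mount"},
		},
	}
}

That is why the last two exec checks assert test -f /volume_mount/foo/test.log succeeds and test ! -f /volume_mount/newsubpath/test.log also succeeds.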
• [SLOW TEST:181.516 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":188,"skipped":3059,"failed":0} SSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:53:15.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 6 20:53:19.972: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6903 pod-service-account-6eedcc4f-1270-4ce5-b678-e2611a01fa9b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 6 20:53:20.214: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6903 pod-service-account-6eedcc4f-1270-4ce5-b678-e2611a01fa9b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 6 20:53:20.501: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6903 pod-service-account-6eedcc4f-1270-4ce5-b678-e2611a01fa9b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:53:20.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6903" for this suite. 
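The three kubectl exec calls above read the files the kubelet projects into every pod for its service account. From inside a pod, the same check is plain file I/O against well-known paths:

package main

import (
	"fmt"
	"os"
)

func main() {
	const base = "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(base + "/" + name)
		if err != nil {
			fmt.Println(name, "not mounted:", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(b))
	}
}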
• [SLOW TEST:5.682 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":189,"skipped":3065,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:53:20.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 6 20:53:21.703: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9874 /api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-configmap-a 7a96713f-6020-4cf5-805d-b1a41a908bab 2098382 0 2020-05-06 20:53:21 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-06 20:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 6 20:53:21.703: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9874 /api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-configmap-a 7a96713f-6020-4cf5-805d-b1a41a908bab 2098382 0 2020-05-06 20:53:21 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-06 20:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 6 20:53:31.712: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9874 /api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-configmap-a 7a96713f-6020-4cf5-805d-b1a41a908bab 2098431 0 2020-05-06 20:53:21 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-06 20:53:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 20:53:31.713: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9874 /api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-configmap-a 7a96713f-6020-4cf5-805d-b1a41a908bab 2098431 0 2020-05-06 20:53:21 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-06 20:53:31 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 6 20:53:41.722: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9874 /api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-configmap-a 7a96713f-6020-4cf5-805d-b1a41a908bab 2098461 0 2020-05-06 20:53:21 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-06 20:53:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 20:53:41.722: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9874 /api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-configmap-a 7a96713f-6020-4cf5-805d-b1a41a908bab 2098461 0 2020-05-06 20:53:21 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-06 20:53:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 6 20:53:51.729: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9874 /api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-configmap-a 7a96713f-6020-4cf5-805d-b1a41a908bab 2098491 0 2020-05-06 20:53:21 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-06 20:53:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 20:53:51.729: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9874 /api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-configmap-a 7a96713f-6020-4cf5-805d-b1a41a908bab 2098491 0 2020-05-06 20:53:21 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-06 20:53:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 6 20:54:01.813: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9874 /api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-configmap-b aec8f212-8ba5-41ac-b5f5-ffa11eca7040 2098518 0 2020-05-06 20:54:01 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-06 20:54:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 6 20:54:01.813: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9874 /api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-configmap-b aec8f212-8ba5-41ac-b5f5-ffa11eca7040 2098518 0 2020-05-06 20:54:01 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-06 20:54:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 6 20:54:11.818: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9874 /api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-configmap-b aec8f212-8ba5-41ac-b5f5-ffa11eca7040 2098548 0 2020-05-06 20:54:01 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-06 20:54:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 6 20:54:11.818: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9874 /api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-configmap-b aec8f212-8ba5-41ac-b5f5-ffa11eca7040 2098548 0 2020-05-06 20:54:01 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-06 20:54:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:54:21.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9874" for this suite. • [SLOW TEST:60.982 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":190,"skipped":3068,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:54:21.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2546 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2546 STEP: creating replication controller externalsvc in namespace services-2546 I0506 20:54:22.694192 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2546, replica count: 2 I0506 20:54:25.744574 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 20:54:28.744826 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 6 20:54:28.799: INFO: Creating new exec pod May 6 20:54:37.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2546 execpodzk262 -- /bin/sh -x -c nslookup nodeport-service' May 6 20:54:37.718: INFO: stderr: "I0506 20:54:37.633108 3050 log.go:172] (0xc000b57600) (0xc000b401e0) Create stream\nI0506 20:54:37.633318 3050 log.go:172] (0xc000b57600) (0xc000b401e0) Stream added, broadcasting: 1\nI0506 20:54:37.637881 3050 log.go:172] (0xc000b57600) Reply frame received for 1\nI0506 20:54:37.637921 3050 log.go:172] (0xc000b57600) (0xc000822f00) Create stream\nI0506 20:54:37.637933 3050 log.go:172] (0xc000b57600) (0xc000822f00) Stream added, broadcasting: 3\nI0506 20:54:37.638773 3050 log.go:172] (0xc000b57600) Reply frame received for 3\nI0506 20:54:37.638797 3050 log.go:172] (0xc000b57600) (0xc000818640) Create stream\nI0506 20:54:37.638805 3050 log.go:172] (0xc000b57600) (0xc000818640) Stream added, broadcasting: 5\nI0506 20:54:37.639681 3050 log.go:172] (0xc000b57600) Reply frame received for 5\nI0506 20:54:37.699585 3050 log.go:172] (0xc000b57600) Data frame received for 5\nI0506 20:54:37.699621 3050 log.go:172] (0xc000818640) (5) Data frame handling\nI0506 20:54:37.699646 3050 log.go:172] (0xc000818640) (5) Data frame sent\n+ nslookup nodeport-service\nI0506 20:54:37.707477 3050 log.go:172] (0xc000b57600) Data frame received for 3\nI0506 20:54:37.707500 3050 log.go:172] (0xc000822f00) (3) Data frame handling\nI0506 20:54:37.707519 3050 log.go:172] (0xc000822f00) (3) Data frame sent\nI0506 20:54:37.708459 3050 log.go:172] (0xc000b57600) Data frame received for 3\nI0506 20:54:37.708483 3050 log.go:172] (0xc000822f00) (3) Data frame handling\nI0506 20:54:37.708536 3050 log.go:172] (0xc000822f00) (3) Data frame sent\nI0506 20:54:37.709285 3050 log.go:172] (0xc000b57600) Data frame received for 3\nI0506 20:54:37.709303 3050 log.go:172] (0xc000822f00) (3) Data frame handling\nI0506 20:54:37.710027 3050 log.go:172] (0xc000b57600) Data frame received for 5\nI0506 20:54:37.710042 3050 log.go:172] (0xc000818640) (5) Data frame handling\nI0506 20:54:37.711259 3050 log.go:172] (0xc000b57600) Data frame received for 1\nI0506 20:54:37.711283 3050 log.go:172] (0xc000b401e0) (1) Data frame handling\nI0506 20:54:37.711297 3050 log.go:172] (0xc000b401e0) (1) Data frame sent\nI0506 20:54:37.711308 3050 log.go:172] (0xc000b57600) (0xc000b401e0) Stream removed, broadcasting: 1\nI0506 20:54:37.711639 3050 log.go:172] (0xc000b57600) (0xc000b401e0) Stream removed, broadcasting: 1\nI0506 20:54:37.711655 3050 log.go:172] (0xc000b57600) (0xc000822f00) Stream removed, broadcasting: 3\nI0506 20:54:37.711663 3050 log.go:172] (0xc000b57600) (0xc000818640) Stream removed, broadcasting: 5\n" May 6 20:54:37.718: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2546.svc.cluster.local\tcanonical name = externalsvc.services-2546.svc.cluster.local.\nName:\texternalsvc.services-2546.svc.cluster.local\nAddress: 10.105.203.38\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2546, will wait for the garbage collector to delete the pods May 6 20:54:37.777: INFO: Deleting 
ReplicationController externalsvc took: 5.729635ms May 6 20:54:38.478: INFO: Terminating ReplicationController externalsvc pods took: 700.252239ms May 6 20:54:45.626: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:54:45.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2546" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:23.870 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":191,"skipped":3071,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:54:45.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 20:54:47.063: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07e8efb3-7021-4d65-8c81-170aa564d367" in namespace "downward-api-9081" to be "Succeeded or Failed" May 6 20:54:47.083: INFO: Pod "downwardapi-volume-07e8efb3-7021-4d65-8c81-170aa564d367": Phase="Pending", Reason="", readiness=false. Elapsed: 19.81642ms May 6 20:54:49.134: INFO: Pod "downwardapi-volume-07e8efb3-7021-4d65-8c81-170aa564d367": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071333728s May 6 20:54:51.170: INFO: Pod "downwardapi-volume-07e8efb3-7021-4d65-8c81-170aa564d367": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.107161447s STEP: Saw pod success May 6 20:54:51.170: INFO: Pod "downwardapi-volume-07e8efb3-7021-4d65-8c81-170aa564d367" satisfied condition "Succeeded or Failed" May 6 20:54:51.174: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-07e8efb3-7021-4d65-8c81-170aa564d367 container client-container: STEP: delete the pod May 6 20:54:51.267: INFO: Waiting for pod downwardapi-volume-07e8efb3-7021-4d65-8c81-170aa564d367 to disappear May 6 20:54:51.295: INFO: Pod downwardapi-volume-07e8efb3-7021-4d65-8c81-170aa564d367 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:54:51.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9081" for this suite. • [SLOW TEST:5.600 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":192,"skipped":3086,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:54:51.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod May 6 20:54:55.636: INFO: Pod pod-hostip-fcc5b039-b401-4837-90e1-1034ec34872c has hostIP: 172.17.0.12 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:54:55.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5715" for this suite. 
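The host-IP spec passes as soon as status.hostIP is populated. With client-go the check is a single Get; a sketch (clientset construction omitted, namespace and pod name illustrative):

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func hasHostIP(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	fmt.Println("hostIP:", pod.Status.HostIP) // e.g. 172.17.0.12 in this run
	return pod.Status.HostIP != "", nil
}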
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":193,"skipped":3097,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:54:55.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-wftb STEP: Creating a pod to test atomic-volume-subpath May 6 20:54:55.775: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wftb" in namespace "subpath-169" to be "Succeeded or Failed" May 6 20:54:55.831: INFO: Pod "pod-subpath-test-configmap-wftb": Phase="Pending", Reason="", readiness=false. Elapsed: 56.277999ms May 6 20:54:57.834: INFO: Pod "pod-subpath-test-configmap-wftb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059615496s May 6 20:54:59.839: INFO: Pod "pod-subpath-test-configmap-wftb": Phase="Running", Reason="", readiness=true. Elapsed: 4.06385702s May 6 20:55:01.903: INFO: Pod "pod-subpath-test-configmap-wftb": Phase="Running", Reason="", readiness=true. Elapsed: 6.12786871s May 6 20:55:03.938: INFO: Pod "pod-subpath-test-configmap-wftb": Phase="Running", Reason="", readiness=true. Elapsed: 8.163368908s May 6 20:55:05.942: INFO: Pod "pod-subpath-test-configmap-wftb": Phase="Running", Reason="", readiness=true. Elapsed: 10.167472519s May 6 20:55:07.946: INFO: Pod "pod-subpath-test-configmap-wftb": Phase="Running", Reason="", readiness=true. Elapsed: 12.171753958s May 6 20:55:09.950: INFO: Pod "pod-subpath-test-configmap-wftb": Phase="Running", Reason="", readiness=true. Elapsed: 14.175307498s May 6 20:55:12.039: INFO: Pod "pod-subpath-test-configmap-wftb": Phase="Running", Reason="", readiness=true. Elapsed: 16.26460388s May 6 20:55:14.188: INFO: Pod "pod-subpath-test-configmap-wftb": Phase="Running", Reason="", readiness=true. Elapsed: 18.413462279s May 6 20:55:16.192: INFO: Pod "pod-subpath-test-configmap-wftb": Phase="Running", Reason="", readiness=true. Elapsed: 20.417630844s May 6 20:55:18.303: INFO: Pod "pod-subpath-test-configmap-wftb": Phase="Running", Reason="", readiness=true. Elapsed: 22.528128512s May 6 20:55:20.811: INFO: Pod "pod-subpath-test-configmap-wftb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 25.036444685s STEP: Saw pod success May 6 20:55:20.811: INFO: Pod "pod-subpath-test-configmap-wftb" satisfied condition "Succeeded or Failed" May 6 20:55:20.820: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-wftb container test-container-subpath-configmap-wftb: STEP: delete the pod May 6 20:55:21.131: INFO: Waiting for pod pod-subpath-test-configmap-wftb to disappear May 6 20:55:21.326: INFO: Pod pod-subpath-test-configmap-wftb no longer exists STEP: Deleting pod pod-subpath-test-configmap-wftb May 6 20:55:21.326: INFO: Deleting pod "pod-subpath-test-configmap-wftb" in namespace "subpath-169" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:55:21.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-169" for this suite. • [SLOW TEST:25.692 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":194,"skipped":3118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:55:21.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 20:55:21.559: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c194f7a-88a1-47a6-82af-893848f214b6" in namespace "downward-api-2148" to be "Succeeded or Failed" May 6 20:55:21.626: INFO: Pod "downwardapi-volume-6c194f7a-88a1-47a6-82af-893848f214b6": Phase="Pending", Reason="", readiness=false. Elapsed: 67.241735ms May 6 20:55:23.746: INFO: Pod "downwardapi-volume-6c194f7a-88a1-47a6-82af-893848f214b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18704028s May 6 20:55:25.878: INFO: Pod "downwardapi-volume-6c194f7a-88a1-47a6-82af-893848f214b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318969s May 6 20:55:27.882: INFO: Pod "downwardapi-volume-6c194f7a-88a1-47a6-82af-893848f214b6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.323748207s STEP: Saw pod success May 6 20:55:27.882: INFO: Pod "downwardapi-volume-6c194f7a-88a1-47a6-82af-893848f214b6" satisfied condition "Succeeded or Failed" May 6 20:55:27.886: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6c194f7a-88a1-47a6-82af-893848f214b6 container client-container: STEP: delete the pod May 6 20:55:27.908: INFO: Waiting for pod downwardapi-volume-6c194f7a-88a1-47a6-82af-893848f214b6 to disappear May 6 20:55:27.955: INFO: Pod downwardapi-volume-6c194f7a-88a1-47a6-82af-893848f214b6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:55:27.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2148" for this suite. • [SLOW TEST:6.627 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":195,"skipped":3154,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:55:27.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 6 20:55:28.054: INFO: Waiting up to 5m0s for pod "downward-api-6523ae09-6cf9-4be5-a2b0-201ece1146c9" in namespace "downward-api-2143" to be "Succeeded or Failed" May 6 20:55:28.080: INFO: Pod "downward-api-6523ae09-6cf9-4be5-a2b0-201ece1146c9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.05695ms May 6 20:55:30.084: INFO: Pod "downward-api-6523ae09-6cf9-4be5-a2b0-201ece1146c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030043341s May 6 20:55:32.088: INFO: Pod "downward-api-6523ae09-6cf9-4be5-a2b0-201ece1146c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033989089s STEP: Saw pod success May 6 20:55:32.088: INFO: Pod "downward-api-6523ae09-6cf9-4be5-a2b0-201ece1146c9" satisfied condition "Succeeded or Failed" May 6 20:55:32.091: INFO: Trying to get logs from node latest-worker pod downward-api-6523ae09-6cf9-4be5-a2b0-201ece1146c9 container dapi-container: STEP: delete the pod May 6 20:55:32.136: INFO: Waiting for pod downward-api-6523ae09-6cf9-4be5-a2b0-201ece1146c9 to disappear May 6 20:55:32.170: INFO: Pod downward-api-6523ae09-6cf9-4be5-a2b0-201ece1146c9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:55:32.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2143" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":196,"skipped":3166,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:55:32.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 6 20:55:40.468: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 20:55:40.495: INFO: Pod pod-with-prestop-http-hook still exists May 6 20:55:42.495: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 20:55:42.499: INFO: Pod pod-with-prestop-http-hook still exists May 6 20:55:44.495: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 20:55:44.500: INFO: Pod pod-with-prestop-http-hook still exists May 6 20:55:46.495: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 20:55:46.783: INFO: Pod pod-with-prestop-http-hook still exists May 6 20:55:48.495: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 20:55:48.578: INFO: Pod pod-with-prestop-http-hook still exists May 6 20:55:50.495: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 20:55:50.542: INFO: Pod pod-with-prestop-http-hook still exists May 6 20:55:52.495: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 20:55:52.506: INFO: Pod pod-with-prestop-http-hook still exists May 6 20:55:54.495: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 20:55:54.597: INFO: Pod pod-with-prestop-http-hook still exists May 6 20:55:56.495: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 20:55:56.686: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:55:56.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5080" for this suite. 
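Before the pass summary below: the polling loop above shows the pod lingering through graceful termination while the kubelet delivers its preStop hook. A sketch of the HTTP preStop shape this spec drives (not from this run; compiled against a v1.18-era k8s.io/api, where the handler type is corev1.Handler — current releases rename it corev1.LifecycleHandler; host, port, and image are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pod-with-prestop-http-hook",
                    Image: "k8s.gcr.io/pause:3.2",
                    Lifecycle: &corev1.Lifecycle{
                        // On delete, the kubelet performs this GET against the
                        // handler pod before stopping the container.
                        PreStop: &corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/echo?msg=prestop",
                                Host: "10.244.1.1", // handler pod IP; illustrative
                                Port: intstr.FromInt(8080),
                            },
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }

The "check prestop hook" step above then asks the handler pod whether it received the GET.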
• [SLOW TEST:24.738 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":197,"skipped":3176,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:55:56.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-80a62f0f-ea10-431d-8619-f2c4c66f9e49 STEP: Creating a pod to test consume secrets May 6 20:55:57.671: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-be42ebec-4a41-4cbb-be89-a77da598460a" in namespace "projected-3704" to be "Succeeded or Failed" May 6 20:55:57.997: INFO: Pod "pod-projected-secrets-be42ebec-4a41-4cbb-be89-a77da598460a": Phase="Pending", Reason="", readiness=false. Elapsed: 326.136712ms May 6 20:56:00.002: INFO: Pod "pod-projected-secrets-be42ebec-4a41-4cbb-be89-a77da598460a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330783439s May 6 20:56:02.361: INFO: Pod "pod-projected-secrets-be42ebec-4a41-4cbb-be89-a77da598460a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.690513804s May 6 20:56:04.630: INFO: Pod "pod-projected-secrets-be42ebec-4a41-4cbb-be89-a77da598460a": Phase="Running", Reason="", readiness=true. Elapsed: 6.959306316s May 6 20:56:06.901: INFO: Pod "pod-projected-secrets-be42ebec-4a41-4cbb-be89-a77da598460a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.23021932s STEP: Saw pod success May 6 20:56:06.901: INFO: Pod "pod-projected-secrets-be42ebec-4a41-4cbb-be89-a77da598460a" satisfied condition "Succeeded or Failed" May 6 20:56:06.904: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-be42ebec-4a41-4cbb-be89-a77da598460a container projected-secret-volume-test: STEP: delete the pod May 6 20:56:07.103: INFO: Waiting for pod pod-projected-secrets-be42ebec-4a41-4cbb-be89-a77da598460a to disappear May 6 20:56:07.230: INFO: Pod pod-projected-secrets-be42ebec-4a41-4cbb-be89-a77da598460a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:56:07.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3704" for this suite. 
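Before the projected-secret pass summary below: the "with mappings" part of that spec means a secret key is surfaced at a custom relative path rather than under its own name. A sketch of the volume shape (not from this run; assumes k8s.io/api; the secret name and mapping are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "projected-secret-test-map", // illustrative
                            },
                            // The "mapping": key "data-1" appears in the volume
                            // at a custom relative path instead of its key name.
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",
                                Path: "new-path-data-1",
                            }},
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }

The test container then reads the mapped file and the spec asserts its content and mode.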
• [SLOW TEST:10.323 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":198,"skipped":3181,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:56:07.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:58:07.503: INFO: Deleting pod "var-expansion-8059ba8f-63f5-44cf-bfd9-5815ca882bf5" in namespace "var-expansion-5198" May 6 20:58:07.507: INFO: Wait up to 5m0s for pod "var-expansion-8059ba8f-63f5-44cf-bfd9-5815ca882bf5" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:58:11.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5198" for this suite. 
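Before the variable-expansion pass summary below: that spec is a negative test. subPathExpr must expand to a relative path, so an expansion yielding an absolute path is rejected and the pod is expected to fail; the roughly two-minute gap in the log above is the spec waiting for that failure before deleting the pod. A sketch of the invalid container shape (not from this run; assumes k8s.io/api; env and volume names are illustrative, and the pod would also need a matching "workdir1" volume):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name:  "dapi-container",
            Image: "busybox:1.29",
            Env:   []corev1.EnvVar{{Name: "POD_NAME", Value: "/absolute-path"}},
            VolumeMounts: []corev1.VolumeMount{{
                Name:      "workdir1",
                MountPath: "/volume_mount",
                // Expands to the absolute path "/absolute-path" -- invalid,
                // so the kubelet refuses to start the container.
                SubPathExpr: "$(POD_NAME)",
            }},
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }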
• [SLOW TEST:124.354 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":199,"skipped":3184,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:58:11.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:58:11.870: INFO: Create a RollingUpdate DaemonSet May 6 20:58:11.880: INFO: Check that daemon pods launch on every node of the cluster May 6 20:58:11.886: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:58:11.919: INFO: Number of nodes with available pods: 0 May 6 20:58:11.919: INFO: Node latest-worker is running more than one daemon pod May 6 20:58:13.005: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:58:13.008: INFO: Number of nodes with available pods: 0 May 6 20:58:13.008: INFO: Node latest-worker is running more than one daemon pod May 6 20:58:14.394: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:58:14.398: INFO: Number of nodes with available pods: 0 May 6 20:58:14.398: INFO: Node latest-worker is running more than one daemon pod May 6 20:58:14.923: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:58:14.925: INFO: Number of nodes with available pods: 0 May 6 20:58:14.925: INFO: Node latest-worker is running more than one daemon pod May 6 20:58:15.924: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:58:15.928: INFO: Number of nodes with available pods: 0 May 6 20:58:15.928: INFO: Node latest-worker is running more than one daemon pod May 6 20:58:16.945: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:58:16.975: INFO: 
Number of nodes with available pods: 2 May 6 20:58:16.975: INFO: Number of running nodes: 2, number of available pods: 2 May 6 20:58:16.975: INFO: Update the DaemonSet to trigger a rollout May 6 20:58:16.982: INFO: Updating DaemonSet daemon-set May 6 20:58:26.170: INFO: Roll back the DaemonSet before rollout is complete May 6 20:58:26.374: INFO: Updating DaemonSet daemon-set May 6 20:58:26.374: INFO: Make sure DaemonSet rollback is complete May 6 20:58:26.406: INFO: Wrong image for pod: daemon-set-gcll9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 6 20:58:26.406: INFO: Pod daemon-set-gcll9 is not available May 6 20:58:26.422: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:58:27.436: INFO: Wrong image for pod: daemon-set-gcll9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 6 20:58:27.436: INFO: Pod daemon-set-gcll9 is not available May 6 20:58:27.439: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:58:28.508: INFO: Wrong image for pod: daemon-set-gcll9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 6 20:58:28.508: INFO: Pod daemon-set-gcll9 is not available May 6 20:58:28.579: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 20:58:29.426: INFO: Pod daemon-set-4hgzj is not available May 6 20:58:29.430: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7254, will wait for the garbage collector to delete the pods May 6 20:58:29.494: INFO: Deleting DaemonSet.extensions daemon-set took: 6.447996ms May 6 20:58:29.894: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.3147ms May 6 20:58:34.902: INFO: Number of nodes with available pods: 0 May 6 20:58:34.902: INFO: Number of running nodes: 0, number of available pods: 0 May 6 20:58:34.904: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7254/daemonsets","resourceVersion":"2099699"},"items":null} May 6 20:58:34.906: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7254/pods","resourceVersion":"2099699"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:58:34.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7254" for this suite. 
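Before the DaemonSet pass summary below: the rollout above pushes a broken image (foo:non-existent), the rollback re-applies the previous pod template, and the assertion is that the node whose pod never left the old image keeps it running untouched — only the stuck pod (daemon-set-gcll9) is replaced. A sketch of the RollingUpdate DaemonSet shape involved (not from this run; assumes k8s.io/api and k8s.io/apimachinery; labels and image are illustrative, and a rollback can equivalently be issued with `kubectl rollout undo daemonset/daemon-set`):

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        labels := map[string]string{"daemonset-name": "daemon-set"}
        ds := appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                // RollingUpdate: template changes replace pods node by node.
                UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                    Type: appsv1.RollingUpdateDaemonSetStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "app",
                            Image: "docker.io/library/httpd:2.4.38-alpine",
                        }},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(ds, "", "  ")
        fmt.Println(string(out))
    }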
• [SLOW TEST:23.327 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":200,"skipped":3186,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:58:34.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:58:35.224: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:58:42.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-640" for this suite. 
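Before the CRD pass summary below: the spec above creates a handful of CustomResourceDefinitions and verifies a list call returns them. A sketch of that list operation with the apiextensions client (not from this run; assumes k8s.io/apiextensions-apiserver and k8s.io/client-go at v0.18 or later, where generated clients take a context, plus a kubeconfig at the path shown):

    package main

    import (
        "context"
        "fmt"

        clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := clientset.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // List all CRDs in the cluster, the call the spec exercises.
        crds, err := client.ApiextensionsV1().CustomResourceDefinitions().
            List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, crd := range crds.Items {
            fmt.Println(crd.Name)
        }
    }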
• [SLOW TEST:7.467 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":201,"skipped":3227,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:58:42.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-784 STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 20:58:42.870: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 6 20:58:43.269: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 20:58:45.376: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 20:58:47.273: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 20:58:49.273: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:58:51.272: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:58:53.276: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:58:55.279: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:58:57.272: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:58:59.274: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:59:01.273: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:59:03.274: INFO: The status of Pod netserver-0 is Running (Ready = true) May 6 20:59:03.280: INFO: The status of Pod netserver-1 is Running (Ready = false) May 6 20:59:05.285: INFO: The status of Pod netserver-1 is Running (Ready = false) May 6 20:59:07.285: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 6 20:59:11.335: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.2:8080/dial?request=hostname&protocol=http&host=10.244.1.174&port=8080&tries=1'] Namespace:pod-network-test-784 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:59:11.335: INFO: >>> kubeConfig: /root/.kube/config I0506 
20:59:11.367093 7 log.go:172] (0xc0027236b0) (0xc001427b80) Create stream I0506 20:59:11.367122 7 log.go:172] (0xc0027236b0) (0xc001427b80) Stream added, broadcasting: 1 I0506 20:59:11.368997 7 log.go:172] (0xc0027236b0) Reply frame received for 1 I0506 20:59:11.369073 7 log.go:172] (0xc0027236b0) (0xc001427c20) Create stream I0506 20:59:11.369095 7 log.go:172] (0xc0027236b0) (0xc001427c20) Stream added, broadcasting: 3 I0506 20:59:11.370299 7 log.go:172] (0xc0027236b0) Reply frame received for 3 I0506 20:59:11.370335 7 log.go:172] (0xc0027236b0) (0xc001427ea0) Create stream I0506 20:59:11.370352 7 log.go:172] (0xc0027236b0) (0xc001427ea0) Stream added, broadcasting: 5 I0506 20:59:11.377571 7 log.go:172] (0xc0027236b0) Reply frame received for 5 I0506 20:59:11.470661 7 log.go:172] (0xc0027236b0) Data frame received for 3 I0506 20:59:11.470693 7 log.go:172] (0xc001427c20) (3) Data frame handling I0506 20:59:11.470707 7 log.go:172] (0xc001427c20) (3) Data frame sent I0506 20:59:11.471128 7 log.go:172] (0xc0027236b0) Data frame received for 3 I0506 20:59:11.471153 7 log.go:172] (0xc001427c20) (3) Data frame handling I0506 20:59:11.471261 7 log.go:172] (0xc0027236b0) Data frame received for 5 I0506 20:59:11.471279 7 log.go:172] (0xc001427ea0) (5) Data frame handling I0506 20:59:11.473024 7 log.go:172] (0xc0027236b0) Data frame received for 1 I0506 20:59:11.473041 7 log.go:172] (0xc001427b80) (1) Data frame handling I0506 20:59:11.473053 7 log.go:172] (0xc001427b80) (1) Data frame sent I0506 20:59:11.473060 7 log.go:172] (0xc0027236b0) (0xc001427b80) Stream removed, broadcasting: 1 I0506 20:59:11.473069 7 log.go:172] (0xc0027236b0) Go away received I0506 20:59:11.473398 7 log.go:172] (0xc0027236b0) (0xc001427b80) Stream removed, broadcasting: 1 I0506 20:59:11.473420 7 log.go:172] (0xc0027236b0) (0xc001427c20) Stream removed, broadcasting: 3 I0506 20:59:11.473429 7 log.go:172] (0xc0027236b0) (0xc001427ea0) Stream removed, broadcasting: 5 May 6 20:59:11.473: INFO: Waiting for responses: map[] May 6 20:59:11.476: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.2:8080/dial?request=hostname&protocol=http&host=10.244.2.254&port=8080&tries=1'] Namespace:pod-network-test-784 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:59:11.476: INFO: >>> kubeConfig: /root/.kube/config I0506 20:59:11.502416 7 log.go:172] (0xc002f26dc0) (0xc0029a5680) Create stream I0506 20:59:11.502461 7 log.go:172] (0xc002f26dc0) (0xc0029a5680) Stream added, broadcasting: 1 I0506 20:59:11.504583 7 log.go:172] (0xc002f26dc0) Reply frame received for 1 I0506 20:59:11.504620 7 log.go:172] (0xc002f26dc0) (0xc0029a57c0) Create stream I0506 20:59:11.504635 7 log.go:172] (0xc002f26dc0) (0xc0029a57c0) Stream added, broadcasting: 3 I0506 20:59:11.506090 7 log.go:172] (0xc002f26dc0) Reply frame received for 3 I0506 20:59:11.506135 7 log.go:172] (0xc002f26dc0) (0xc002a80000) Create stream I0506 20:59:11.506150 7 log.go:172] (0xc002f26dc0) (0xc002a80000) Stream added, broadcasting: 5 I0506 20:59:11.507182 7 log.go:172] (0xc002f26dc0) Reply frame received for 5 I0506 20:59:11.566842 7 log.go:172] (0xc002f26dc0) Data frame received for 3 I0506 20:59:11.566875 7 log.go:172] (0xc0029a57c0) (3) Data frame handling I0506 20:59:11.566897 7 log.go:172] (0xc0029a57c0) (3) Data frame sent I0506 20:59:11.567317 7 log.go:172] (0xc002f26dc0) Data frame received for 3 I0506 20:59:11.567333 7 log.go:172] (0xc0029a57c0) (3) Data frame handling I0506 
20:59:11.567809 7 log.go:172] (0xc002f26dc0) Data frame received for 5 I0506 20:59:11.567826 7 log.go:172] (0xc002a80000) (5) Data frame handling I0506 20:59:11.570492 7 log.go:172] (0xc002f26dc0) Data frame received for 1 I0506 20:59:11.570515 7 log.go:172] (0xc0029a5680) (1) Data frame handling I0506 20:59:11.570533 7 log.go:172] (0xc0029a5680) (1) Data frame sent I0506 20:59:11.570550 7 log.go:172] (0xc002f26dc0) (0xc0029a5680) Stream removed, broadcasting: 1 I0506 20:59:11.570568 7 log.go:172] (0xc002f26dc0) Go away received I0506 20:59:11.570680 7 log.go:172] (0xc002f26dc0) (0xc0029a5680) Stream removed, broadcasting: 1 I0506 20:59:11.570717 7 log.go:172] (0xc002f26dc0) (0xc0029a57c0) Stream removed, broadcasting: 3 I0506 20:59:11.570747 7 log.go:172] (0xc002f26dc0) (0xc002a80000) Stream removed, broadcasting: 5 May 6 20:59:11.570: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:59:11.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-784" for this suite. • [SLOW TEST:29.189 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":202,"skipped":3238,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:59:11.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0506 20:59:13.043173 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
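Before the garbage-collector output continues below, a note on the intra-pod HTTP check that just passed above: the test pod curls agnhost's /dial endpoint, which in turn dials the target pod and reports back the hostnames that answered — that is what the two exec'd curl commands and "Waiting for responses: map[]" in the stream logs correspond to. A sketch of the probe (not from this run; addresses are illustrative, and plain net/http stands in for the in-pod curl):

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
        "net/url"
    )

    func main() {
        q := url.Values{}
        q.Set("request", "hostname") // ask the target for its hostname
        q.Set("protocol", "http")
        q.Set("host", "10.244.1.174") // target pod IP; illustrative
        q.Set("port", "8080")
        q.Set("tries", "1")
        // The dialer pod fans the request out and aggregates the answers.
        u := "http://10.244.2.2:8080/dial?" + q.Encode() // dialer pod; illustrative

        resp, err := http.Get(u)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, err := ioutil.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body)) // e.g. {"responses":["netserver-0"]}
    }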
May 6 20:59:13.043: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:59:13.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7389" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":203,"skipped":3247,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:59:13.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:59:21.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8670" for this suite.
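Before the kubelet pass summary below: that spec schedules a container that always exits nonzero and asserts its status carries a terminated state with a non-empty Reason (typically "Error") and the exit code. A sketch of the pod shape (not from this run; assumes k8s.io/api and k8s.io/apimachinery; names and image are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "bin-false",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"/bin/false"}, // always exits 1
                }},
            },
        }
        // After the container fails, inspect:
        //   pod.Status.ContainerStatuses[0].State.Terminated.Reason / .ExitCode
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }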
• [SLOW TEST:9.295 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":204,"skipped":3252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:59:22.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 6 20:59:23.438: INFO: Waiting up to 5m0s for pod "pod-c05d8d04-2888-4980-bced-4ee3294ea095" in namespace "emptydir-3260" to be "Succeeded or Failed" May 6 20:59:23.480: INFO: Pod "pod-c05d8d04-2888-4980-bced-4ee3294ea095": Phase="Pending", Reason="", readiness=false. Elapsed: 41.688554ms May 6 20:59:25.647: INFO: Pod "pod-c05d8d04-2888-4980-bced-4ee3294ea095": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209058414s May 6 20:59:27.650: INFO: Pod "pod-c05d8d04-2888-4980-bced-4ee3294ea095": Phase="Running", Reason="", readiness=true. Elapsed: 4.212045155s May 6 20:59:29.655: INFO: Pod "pod-c05d8d04-2888-4980-bced-4ee3294ea095": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.216399494s STEP: Saw pod success May 6 20:59:29.655: INFO: Pod "pod-c05d8d04-2888-4980-bced-4ee3294ea095" satisfied condition "Succeeded or Failed" May 6 20:59:29.660: INFO: Trying to get logs from node latest-worker pod pod-c05d8d04-2888-4980-bced-4ee3294ea095 container test-container: STEP: delete the pod May 6 20:59:29.709: INFO: Waiting for pod pod-c05d8d04-2888-4980-bced-4ee3294ea095 to disappear May 6 20:59:29.717: INFO: Pod pod-c05d8d04-2888-4980-bced-4ee3294ea095 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:59:29.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3260" for this suite. 
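Before the emptyDir pass summary below: "volume on tmpfs" means the emptyDir uses medium "Memory", so it is backed by tmpfs rather than node disk; the test container lists the mount to confirm the filesystem type and the default mode. A sketch of the volume and mount shape (not from this run; assumes k8s.io/api; names are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{
                    Medium: corev1.StorageMediumMemory, // tmpfs instead of node disk
                },
            },
        }
        mount := corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
        out, _ := json.MarshalIndent(struct {
            Volume corev1.Volume      `json:"volume"`
            Mount  corev1.VolumeMount `json:"mount"`
        }{vol, mount}, "", "  ")
        fmt.Println(string(out))
    }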
• [SLOW TEST:7.380 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":205,"skipped":3280,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:59:29.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:59:29.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4578" for this suite. 
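Before the QOS pass summary below: when every container in a pod has requests equal to limits for both cpu and memory, the apiserver sets status.qosClass to "Guaranteed", and that field is what the spec verifies. A sketch of the matching-resources container shape (not from this run; assumes k8s.io/api and k8s.io/apimachinery; quantities and image are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        rl := corev1.ResourceList{
            corev1.ResourceCPU:    resource.MustParse("100m"),
            corev1.ResourceMemory: resource.MustParse("100Mi"),
        }
        c := corev1.Container{
            Name:  "agnhost",
            Image: "k8s.gcr.io/pause:3.2",
            Resources: corev1.ResourceRequirements{
                Requests: rl, // requests == limits => QOSClass "Guaranteed"
                Limits:   rl,
            },
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }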
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":206,"skipped":3284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:59:29.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-1561 STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 20:59:29.952: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 6 20:59:30.057: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 20:59:32.079: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 20:59:34.062: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 20:59:36.222: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:59:38.061: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:59:40.239: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:59:42.061: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:59:44.279: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:59:46.233: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 20:59:48.251: INFO: The status of Pod netserver-0 is Running (Ready = true) May 6 20:59:48.256: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 6 20:59:54.476: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.179 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1561 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:59:54.476: INFO: >>> kubeConfig: /root/.kube/config I0506 20:59:54.515464 7 log.go:172] (0xc002f73290) (0xc002023360) Create stream I0506 20:59:54.515497 7 log.go:172] (0xc002f73290) (0xc002023360) Stream added, broadcasting: 1 I0506 20:59:54.517929 7 log.go:172] (0xc002f73290) Reply frame received for 1 I0506 20:59:54.517974 7 log.go:172] (0xc002f73290) (0xc002023400) Create stream I0506 20:59:54.517985 7 log.go:172] (0xc002f73290) (0xc002023400) Stream added, broadcasting: 3 I0506 20:59:54.518983 7 log.go:172] (0xc002f73290) Reply frame received for 3 I0506 20:59:54.519021 7 log.go:172] (0xc002f73290) (0xc002a114a0) Create stream I0506 20:59:54.519045 7 log.go:172] (0xc002f73290) (0xc002a114a0) Stream added, broadcasting: 5 I0506 20:59:54.520164 7 log.go:172] (0xc002f73290) Reply frame received for 5 I0506 20:59:55.618920 7 log.go:172] (0xc002f73290) 
Data frame received for 5 I0506 20:59:55.618957 7 log.go:172] (0xc002a114a0) (5) Data frame handling I0506 20:59:55.618972 7 log.go:172] (0xc002f73290) Data frame received for 3 I0506 20:59:55.618980 7 log.go:172] (0xc002023400) (3) Data frame handling I0506 20:59:55.618988 7 log.go:172] (0xc002023400) (3) Data frame sent I0506 20:59:55.618995 7 log.go:172] (0xc002f73290) Data frame received for 3 I0506 20:59:55.619001 7 log.go:172] (0xc002023400) (3) Data frame handling I0506 20:59:55.620436 7 log.go:172] (0xc002f73290) Data frame received for 1 I0506 20:59:55.620452 7 log.go:172] (0xc002023360) (1) Data frame handling I0506 20:59:55.620461 7 log.go:172] (0xc002023360) (1) Data frame sent I0506 20:59:55.620481 7 log.go:172] (0xc002f73290) (0xc002023360) Stream removed, broadcasting: 1 I0506 20:59:55.620495 7 log.go:172] (0xc002f73290) Go away received I0506 20:59:55.620580 7 log.go:172] (0xc002f73290) (0xc002023360) Stream removed, broadcasting: 1 I0506 20:59:55.620599 7 log.go:172] (0xc002f73290) (0xc002023400) Stream removed, broadcasting: 3 I0506 20:59:55.620607 7 log.go:172] (0xc002f73290) (0xc002a114a0) Stream removed, broadcasting: 5 May 6 20:59:55.620: INFO: Found all expected endpoints: [netserver-0] May 6 20:59:55.623: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1561 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 20:59:55.623: INFO: >>> kubeConfig: /root/.kube/config I0506 20:59:55.653928 7 log.go:172] (0xc002f73810) (0xc002023900) Create stream I0506 20:59:55.653961 7 log.go:172] (0xc002f73810) (0xc002023900) Stream added, broadcasting: 1 I0506 20:59:55.655914 7 log.go:172] (0xc002f73810) Reply frame received for 1 I0506 20:59:55.655961 7 log.go:172] (0xc002f73810) (0xc00179ba40) Create stream I0506 20:59:55.655977 7 log.go:172] (0xc002f73810) (0xc00179ba40) Stream added, broadcasting: 3 I0506 20:59:55.657025 7 log.go:172] (0xc002f73810) Reply frame received for 3 I0506 20:59:55.657077 7 log.go:172] (0xc002f73810) (0xc0020239a0) Create stream I0506 20:59:55.657089 7 log.go:172] (0xc002f73810) (0xc0020239a0) Stream added, broadcasting: 5 I0506 20:59:55.658008 7 log.go:172] (0xc002f73810) Reply frame received for 5 I0506 20:59:56.732534 7 log.go:172] (0xc002f73810) Data frame received for 3 I0506 20:59:56.732580 7 log.go:172] (0xc00179ba40) (3) Data frame handling I0506 20:59:56.732701 7 log.go:172] (0xc00179ba40) (3) Data frame sent I0506 20:59:56.732783 7 log.go:172] (0xc002f73810) Data frame received for 3 I0506 20:59:56.733052 7 log.go:172] (0xc00179ba40) (3) Data frame handling I0506 20:59:56.733103 7 log.go:172] (0xc002f73810) Data frame received for 5 I0506 20:59:56.733349 7 log.go:172] (0xc0020239a0) (5) Data frame handling I0506 20:59:56.735426 7 log.go:172] (0xc002f73810) Data frame received for 1 I0506 20:59:56.735460 7 log.go:172] (0xc002023900) (1) Data frame handling I0506 20:59:56.735484 7 log.go:172] (0xc002023900) (1) Data frame sent I0506 20:59:56.735560 7 log.go:172] (0xc002f73810) (0xc002023900) Stream removed, broadcasting: 1 I0506 20:59:56.735684 7 log.go:172] (0xc002f73810) Go away received I0506 20:59:56.735744 7 log.go:172] (0xc002f73810) (0xc002023900) Stream removed, broadcasting: 1 I0506 20:59:56.735781 7 log.go:172] (0xc002f73810) (0xc00179ba40) Stream removed, broadcasting: 3 I0506 20:59:56.735809 7 log.go:172] (0xc002f73810) (0xc0020239a0) Stream removed, broadcasting: 5 May 6 
20:59:56.735: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 20:59:56.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1561" for this suite. • [SLOW TEST:26.893 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":207,"skipped":3336,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 20:59:56.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 20:59:56.810: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 6 21:00:01.824: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 6 21:00:01.824: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 6 21:00:03.906: INFO: Creating deployment "test-rollover-deployment" May 6 21:00:04.132: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 6 21:00:06.245: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 6 21:00:06.451: INFO: Ensure that both replica sets have 1 created replica May 6 21:00:06.655: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 6 21:00:06.661: INFO: Updating deployment test-rollover-deployment May 6 21:00:06.661: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 6 21:00:09.191: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 6 21:00:09.195: INFO: Make sure deployment "test-rollover-deployment" is complete May 6 21:00:09.200: INFO: all replica sets need to contain the pod-template-hash label May 6 21:00:09.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395608, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 21:00:11.242: INFO: all replica sets need to contain the pod-template-hash label May 6 21:00:11.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395608, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 21:00:13.219: INFO: all replica sets need to contain the pod-template-hash label May 6 21:00:13.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395612, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 21:00:15.210: INFO: all replica sets need to contain the pod-template-hash label May 6 21:00:15.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395612, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 21:00:17.209: INFO: all replica sets need to contain the pod-template-hash label May 6 21:00:17.209: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395612, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 21:00:19.260: INFO: all replica sets need to contain the pod-template-hash label May 6 21:00:19.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395612, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 21:00:21.208: INFO: all replica sets need to contain the pod-template-hash label May 6 21:00:21.208: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395612, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 21:00:23.412: INFO: May 6 21:00:23.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395622, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395604, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 21:00:25.209: INFO: May 6 21:00:25.209: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 6 21:00:25.218: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1012 /apis/apps/v1/namespaces/deployment-1012/deployments/test-rollover-deployment 93a3f299-428f-4668-98f8-6cb4e75f7670 2100405 2 2020-05-06 21:00:03 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-06 21:00:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-06 21:00:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043dcf98 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-06 21:00:04 +0000 UTC,LastTransitionTime:2020-05-06 21:00:04 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-06 21:00:23 +0000 UTC,LastTransitionTime:2020-05-06 21:00:04 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 6 21:00:25.246: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-1012 /apis/apps/v1/namespaces/deployment-1012/replicasets/test-rollover-deployment-7c4fd9c879 e99c6cfd-93d7-4aea-b03a-494465c21c6f 2100389 2 2020-05-06 21:00:06 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 93a3f299-428f-4668-98f8-6cb4e75f7670 0xc0037f8a67 0xc0037f8a68}] [] [{kube-controller-manager Update apps/v1 2020-05-06 21:00:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93a3f299-428f-4668-98f8-6cb4e75f7670\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037f8b08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 6 21:00:25.246: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 6 21:00:25.246: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1012 /apis/apps/v1/namespaces/deployment-1012/replicasets/test-rollover-controller 712822b2-c5a7-4d2a-8027-9aef6a441139 2100404 2 2020-05-06 20:59:56 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 93a3f299-428f-4668-98f8-6cb4e75f7670 0xc0037f8857 0xc0037f8858}] [] [{e2e.test Update apps/v1 2020-05-06 20:59:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-06 21:00:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93a3f299-428f-4668-98f8-6cb4e75f7670\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0037f88f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 21:00:25.246: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-1012 /apis/apps/v1/namespaces/deployment-1012/replicasets/test-rollover-deployment-5686c4cfd5 28319d63-1335-44cf-9340-95c68e2889a6 2100346 2 2020-05-06 21:00:04 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 93a3f299-428f-4668-98f8-6cb4e75f7670 0xc0037f8967 0xc0037f8968}] [] [{kube-controller-manager Update apps/v1 2020-05-06 21:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93a3f299-428f-4668-98f8-6cb4e75f7670\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037f89f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 21:00:25.251: INFO: Pod "test-rollover-deployment-7c4fd9c879-gvrz4" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-gvrz4 test-rollover-deployment-7c4fd9c879- deployment-1012 /api/v1/namespaces/deployment-1012/pods/test-rollover-deployment-7c4fd9c879-gvrz4 d5e5972d-e39d-4b30-a2e8-33914a2a1684 2100359 0 2020-05-06 21:00:08 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 e99c6cfd-93d7-4aea-b03a-494465c21c6f 0xc0037f90c7 0xc0037f90c8}] [] [{kube-controller-manager Update v1 2020-05-06 21:00:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e99c6cfd-93d7-4aea-b03a-494465c21c6f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 21:00:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.6\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-khwbq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-khwbq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-khwbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreach
able,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 21:00:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 21:00:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 21:00:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 21:00:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.6,StartTime:2020-05-06 21:00:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 21:00:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://92c0d72cfc6ee0a029e035873edbdb4b4fec94defda5c5b1b6e5cb9e465c820b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:00:25.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1012" for this suite. 
• [SLOW TEST:28.513 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":208,"skipped":3357,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:00:25.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:00:25.323: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:00:26.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-637" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":209,"skipped":3364,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:00:26.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-40c8a8a2-e2bb-4b5d-9005-c7ba774f145b STEP: Creating a pod to test consume secrets May 6 21:00:26.994: INFO: Waiting up to 5m0s for pod "pod-secrets-eb4e3af3-6995-4b46-aed6-ea7a3b287dce" in namespace "secrets-5052" to be "Succeeded or Failed" May 6 21:00:27.015: INFO: Pod "pod-secrets-eb4e3af3-6995-4b46-aed6-ea7a3b287dce": Phase="Pending", Reason="", readiness=false. Elapsed: 21.115113ms May 6 21:00:29.020: INFO: Pod "pod-secrets-eb4e3af3-6995-4b46-aed6-ea7a3b287dce": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025729104s May 6 21:00:31.070: INFO: Pod "pod-secrets-eb4e3af3-6995-4b46-aed6-ea7a3b287dce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07538505s STEP: Saw pod success May 6 21:00:31.070: INFO: Pod "pod-secrets-eb4e3af3-6995-4b46-aed6-ea7a3b287dce" satisfied condition "Succeeded or Failed" May 6 21:00:31.087: INFO: Trying to get logs from node latest-worker pod pod-secrets-eb4e3af3-6995-4b46-aed6-ea7a3b287dce container secret-volume-test: STEP: delete the pod May 6 21:00:31.177: INFO: Waiting for pod pod-secrets-eb4e3af3-6995-4b46-aed6-ea7a3b287dce to disappear May 6 21:00:31.182: INFO: Pod pod-secrets-eb4e3af3-6995-4b46-aed6-ea7a3b287dce no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:00:31.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5052" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":210,"skipped":3385,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:00:31.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 21:00:32.266: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 21:00:34.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395632, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395632, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395632, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395632, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 21:00:37.887: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:00:38.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2719" for this suite. STEP: Destroying namespace "webhook-2719-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.163 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":211,"skipped":3416,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:00:38.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5995 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-5995 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5995 May 6 21:00:39.182: INFO: Found 0 stateful pods, waiting for 1 May 6 21:00:49.187: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 6 21:00:49.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5995 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 21:00:53.150: INFO: stderr: "I0506 21:00:53.003678 3071 log.go:172] (0xc00041a840) (0xc000656960) Create stream\nI0506 21:00:53.003722 3071 log.go:172] (0xc00041a840) (0xc000656960) Stream added, broadcasting: 1\nI0506 21:00:53.006671 3071 log.go:172] 
(0xc00041a840) Reply frame received for 1\nI0506 21:00:53.006723 3071 log.go:172] (0xc00041a840) (0xc000640be0) Create stream\nI0506 21:00:53.006736 3071 log.go:172] (0xc00041a840) (0xc000640be0) Stream added, broadcasting: 3\nI0506 21:00:53.007605 3071 log.go:172] (0xc00041a840) Reply frame received for 3\nI0506 21:00:53.007658 3071 log.go:172] (0xc00041a840) (0xc000638460) Create stream\nI0506 21:00:53.007680 3071 log.go:172] (0xc00041a840) (0xc000638460) Stream added, broadcasting: 5\nI0506 21:00:53.008399 3071 log.go:172] (0xc00041a840) Reply frame received for 5\nI0506 21:00:53.100591 3071 log.go:172] (0xc00041a840) Data frame received for 5\nI0506 21:00:53.100637 3071 log.go:172] (0xc000638460) (5) Data frame handling\nI0506 21:00:53.100674 3071 log.go:172] (0xc000638460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 21:00:53.139372 3071 log.go:172] (0xc00041a840) Data frame received for 5\nI0506 21:00:53.139434 3071 log.go:172] (0xc000638460) (5) Data frame handling\nI0506 21:00:53.139478 3071 log.go:172] (0xc00041a840) Data frame received for 3\nI0506 21:00:53.139522 3071 log.go:172] (0xc000640be0) (3) Data frame handling\nI0506 21:00:53.139556 3071 log.go:172] (0xc000640be0) (3) Data frame sent\nI0506 21:00:53.139573 3071 log.go:172] (0xc00041a840) Data frame received for 3\nI0506 21:00:53.139583 3071 log.go:172] (0xc000640be0) (3) Data frame handling\nI0506 21:00:53.141709 3071 log.go:172] (0xc00041a840) Data frame received for 1\nI0506 21:00:53.141723 3071 log.go:172] (0xc000656960) (1) Data frame handling\nI0506 21:00:53.141731 3071 log.go:172] (0xc000656960) (1) Data frame sent\nI0506 21:00:53.141941 3071 log.go:172] (0xc00041a840) (0xc000656960) Stream removed, broadcasting: 1\nI0506 21:00:53.141959 3071 log.go:172] (0xc00041a840) Go away received\nI0506 21:00:53.142478 3071 log.go:172] (0xc00041a840) (0xc000656960) Stream removed, broadcasting: 1\nI0506 21:00:53.142502 3071 log.go:172] (0xc00041a840) (0xc000640be0) Stream removed, broadcasting: 3\nI0506 21:00:53.142514 3071 log.go:172] (0xc00041a840) (0xc000638460) Stream removed, broadcasting: 5\n" May 6 21:00:53.151: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 21:00:53.151: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 21:00:53.154: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 6 21:01:03.234: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 21:01:03.234: INFO: Waiting for statefulset status.replicas updated to 0 May 6 21:01:03.255: INFO: POD NODE PHASE GRACE CONDITIONS May 6 21:01:03.255: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:39 +0000 UTC }] May 6 21:01:03.255: INFO: May 6 21:01:03.255: INFO: StatefulSet ss has not reached scale 3, at 1 May 6 21:01:04.294: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989422659s May 6 21:01:05.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.949827502s May 6 
21:01:06.603: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.944928006s May 6 21:01:07.608: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.641601538s May 6 21:01:08.612: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.636441287s May 6 21:01:09.618: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.631904088s May 6 21:01:10.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.626476745s May 6 21:01:11.626: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.622405113s May 6 21:01:12.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 617.650692ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5995 May 6 21:01:13.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5995 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 21:01:13.872: INFO: stderr: "I0506 21:01:13.774772 3102 log.go:172] (0xc00003b8c0) (0xc000a7c320) Create stream\nI0506 21:01:13.774823 3102 log.go:172] (0xc00003b8c0) (0xc000a7c320) Stream added, broadcasting: 1\nI0506 21:01:13.778940 3102 log.go:172] (0xc00003b8c0) Reply frame received for 1\nI0506 21:01:13.778995 3102 log.go:172] (0xc00003b8c0) (0xc0005c30e0) Create stream\nI0506 21:01:13.779012 3102 log.go:172] (0xc00003b8c0) (0xc0005c30e0) Stream added, broadcasting: 3\nI0506 21:01:13.779887 3102 log.go:172] (0xc00003b8c0) Reply frame received for 3\nI0506 21:01:13.779920 3102 log.go:172] (0xc00003b8c0) (0xc0003a0f00) Create stream\nI0506 21:01:13.779929 3102 log.go:172] (0xc00003b8c0) (0xc0003a0f00) Stream added, broadcasting: 5\nI0506 21:01:13.780782 3102 log.go:172] (0xc00003b8c0) Reply frame received for 5\nI0506 21:01:13.866710 3102 log.go:172] (0xc00003b8c0) Data frame received for 5\nI0506 21:01:13.866744 3102 log.go:172] (0xc0003a0f00) (5) Data frame handling\nI0506 21:01:13.866752 3102 log.go:172] (0xc0003a0f00) (5) Data frame sent\nI0506 21:01:13.866760 3102 log.go:172] (0xc00003b8c0) Data frame received for 5\nI0506 21:01:13.866770 3102 log.go:172] (0xc0003a0f00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 21:01:13.866787 3102 log.go:172] (0xc00003b8c0) Data frame received for 3\nI0506 21:01:13.866794 3102 log.go:172] (0xc0005c30e0) (3) Data frame handling\nI0506 21:01:13.866802 3102 log.go:172] (0xc0005c30e0) (3) Data frame sent\nI0506 21:01:13.866820 3102 log.go:172] (0xc00003b8c0) Data frame received for 3\nI0506 21:01:13.866828 3102 log.go:172] (0xc0005c30e0) (3) Data frame handling\nI0506 21:01:13.867728 3102 log.go:172] (0xc00003b8c0) Data frame received for 1\nI0506 21:01:13.867745 3102 log.go:172] (0xc000a7c320) (1) Data frame handling\nI0506 21:01:13.867753 3102 log.go:172] (0xc000a7c320) (1) Data frame sent\nI0506 21:01:13.867776 3102 log.go:172] (0xc00003b8c0) (0xc000a7c320) Stream removed, broadcasting: 1\nI0506 21:01:13.867835 3102 log.go:172] (0xc00003b8c0) Go away received\nI0506 21:01:13.868027 3102 log.go:172] (0xc00003b8c0) (0xc000a7c320) Stream removed, broadcasting: 1\nI0506 21:01:13.868039 3102 log.go:172] (0xc00003b8c0) (0xc0005c30e0) Stream removed, broadcasting: 3\nI0506 21:01:13.868045 3102 log.go:172] (0xc00003b8c0) (0xc0003a0f00) Stream removed, broadcasting: 5\n" May 6 21:01:13.872: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 21:01:13.872: INFO: 
stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 21:01:13.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5995 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 21:01:14.078: INFO: stderr: "I0506 21:01:14.003233 3123 log.go:172] (0xc000c50f20) (0xc000375cc0) Create stream\nI0506 21:01:14.003354 3123 log.go:172] (0xc000c50f20) (0xc000375cc0) Stream added, broadcasting: 1\nI0506 21:01:14.005724 3123 log.go:172] (0xc000c50f20) Reply frame received for 1\nI0506 21:01:14.005752 3123 log.go:172] (0xc000c50f20) (0xc0002a20a0) Create stream\nI0506 21:01:14.005758 3123 log.go:172] (0xc000c50f20) (0xc0002a20a0) Stream added, broadcasting: 3\nI0506 21:01:14.006733 3123 log.go:172] (0xc000c50f20) Reply frame received for 3\nI0506 21:01:14.006757 3123 log.go:172] (0xc000c50f20) (0xc0004f6500) Create stream\nI0506 21:01:14.006765 3123 log.go:172] (0xc000c50f20) (0xc0004f6500) Stream added, broadcasting: 5\nI0506 21:01:14.007740 3123 log.go:172] (0xc000c50f20) Reply frame received for 5\nI0506 21:01:14.069668 3123 log.go:172] (0xc000c50f20) Data frame received for 3\nI0506 21:01:14.069705 3123 log.go:172] (0xc0002a20a0) (3) Data frame handling\nI0506 21:01:14.069844 3123 log.go:172] (0xc0002a20a0) (3) Data frame sent\nI0506 21:01:14.069874 3123 log.go:172] (0xc000c50f20) Data frame received for 3\nI0506 21:01:14.069898 3123 log.go:172] (0xc0002a20a0) (3) Data frame handling\nI0506 21:01:14.069943 3123 log.go:172] (0xc000c50f20) Data frame received for 5\nI0506 21:01:14.069992 3123 log.go:172] (0xc0004f6500) (5) Data frame handling\nI0506 21:01:14.070026 3123 log.go:172] (0xc0004f6500) (5) Data frame sent\nI0506 21:01:14.070051 3123 log.go:172] (0xc000c50f20) Data frame received for 5\nI0506 21:01:14.070069 3123 log.go:172] (0xc0004f6500) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0506 21:01:14.071584 3123 log.go:172] (0xc000c50f20) Data frame received for 1\nI0506 21:01:14.071615 3123 log.go:172] (0xc000375cc0) (1) Data frame handling\nI0506 21:01:14.071635 3123 log.go:172] (0xc000375cc0) (1) Data frame sent\nI0506 21:01:14.071670 3123 log.go:172] (0xc000c50f20) (0xc000375cc0) Stream removed, broadcasting: 1\nI0506 21:01:14.071711 3123 log.go:172] (0xc000c50f20) Go away received\nI0506 21:01:14.072376 3123 log.go:172] (0xc000c50f20) (0xc000375cc0) Stream removed, broadcasting: 1\nI0506 21:01:14.072399 3123 log.go:172] (0xc000c50f20) (0xc0002a20a0) Stream removed, broadcasting: 3\nI0506 21:01:14.072409 3123 log.go:172] (0xc000c50f20) (0xc0004f6500) Stream removed, broadcasting: 5\n" May 6 21:01:14.079: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 21:01:14.079: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 21:01:14.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5995 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 21:01:14.295: INFO: stderr: "I0506 21:01:14.214234 3143 log.go:172] (0xc000a5ac60) (0xc000852e60) Create stream\nI0506 21:01:14.214280 3143 log.go:172] (0xc000a5ac60) (0xc000852e60) Stream added, broadcasting: 
1\nI0506 21:01:14.224307 3143 log.go:172] (0xc000a5ac60) Reply frame received for 1\nI0506 21:01:14.224357 3143 log.go:172] (0xc000a5ac60) (0xc0008494a0) Create stream\nI0506 21:01:14.224376 3143 log.go:172] (0xc000a5ac60) (0xc0008494a0) Stream added, broadcasting: 3\nI0506 21:01:14.225842 3143 log.go:172] (0xc000a5ac60) Reply frame received for 3\nI0506 21:01:14.225887 3143 log.go:172] (0xc000a5ac60) (0xc000840f00) Create stream\nI0506 21:01:14.225897 3143 log.go:172] (0xc000a5ac60) (0xc000840f00) Stream added, broadcasting: 5\nI0506 21:01:14.226918 3143 log.go:172] (0xc000a5ac60) Reply frame received for 5\nI0506 21:01:14.285920 3143 log.go:172] (0xc000a5ac60) Data frame received for 3\nI0506 21:01:14.285971 3143 log.go:172] (0xc000a5ac60) Data frame received for 5\nI0506 21:01:14.286006 3143 log.go:172] (0xc000840f00) (5) Data frame handling\nI0506 21:01:14.286054 3143 log.go:172] (0xc000840f00) (5) Data frame sent\nI0506 21:01:14.286069 3143 log.go:172] (0xc000a5ac60) Data frame received for 5\nI0506 21:01:14.286080 3143 log.go:172] (0xc000840f00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0506 21:01:14.286117 3143 log.go:172] (0xc0008494a0) (3) Data frame handling\nI0506 21:01:14.286161 3143 log.go:172] (0xc0008494a0) (3) Data frame sent\nI0506 21:01:14.286173 3143 log.go:172] (0xc000a5ac60) Data frame received for 3\nI0506 21:01:14.286181 3143 log.go:172] (0xc0008494a0) (3) Data frame handling\nI0506 21:01:14.288059 3143 log.go:172] (0xc000a5ac60) Data frame received for 1\nI0506 21:01:14.288076 3143 log.go:172] (0xc000852e60) (1) Data frame handling\nI0506 21:01:14.288085 3143 log.go:172] (0xc000852e60) (1) Data frame sent\nI0506 21:01:14.288097 3143 log.go:172] (0xc000a5ac60) (0xc000852e60) Stream removed, broadcasting: 1\nI0506 21:01:14.288113 3143 log.go:172] (0xc000a5ac60) Go away received\nI0506 21:01:14.288553 3143 log.go:172] (0xc000a5ac60) (0xc000852e60) Stream removed, broadcasting: 1\nI0506 21:01:14.288578 3143 log.go:172] (0xc000a5ac60) (0xc0008494a0) Stream removed, broadcasting: 3\nI0506 21:01:14.288589 3143 log.go:172] (0xc000a5ac60) (0xc000840f00) Stream removed, broadcasting: 5\n" May 6 21:01:14.295: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 21:01:14.295: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 21:01:14.299: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 6 21:01:24.305: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 6 21:01:24.305: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 6 21:01:24.305: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 6 21:01:24.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5995 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 21:01:24.507: INFO: stderr: "I0506 21:01:24.437315 3164 log.go:172] (0xc00003b760) (0xc000b72640) Create stream\nI0506 21:01:24.437358 3164 log.go:172] (0xc00003b760) (0xc000b72640) Stream added, broadcasting: 1\nI0506 21:01:24.441035 3164 log.go:172] (0xc00003b760) Reply frame received for 1\nI0506 
21:01:24.441087 3164 log.go:172] (0xc00003b760) (0xc00052cdc0) Create stream\nI0506 21:01:24.441328 3164 log.go:172] (0xc00003b760) (0xc00052cdc0) Stream added, broadcasting: 3\nI0506 21:01:24.442481 3164 log.go:172] (0xc00003b760) Reply frame received for 3\nI0506 21:01:24.442554 3164 log.go:172] (0xc00003b760) (0xc0003d54a0) Create stream\nI0506 21:01:24.442572 3164 log.go:172] (0xc00003b760) (0xc0003d54a0) Stream added, broadcasting: 5\nI0506 21:01:24.443654 3164 log.go:172] (0xc00003b760) Reply frame received for 5\nI0506 21:01:24.500367 3164 log.go:172] (0xc00003b760) Data frame received for 3\nI0506 21:01:24.500405 3164 log.go:172] (0xc00052cdc0) (3) Data frame handling\nI0506 21:01:24.500417 3164 log.go:172] (0xc00052cdc0) (3) Data frame sent\nI0506 21:01:24.500427 3164 log.go:172] (0xc00003b760) Data frame received for 3\nI0506 21:01:24.500434 3164 log.go:172] (0xc00052cdc0) (3) Data frame handling\nI0506 21:01:24.500464 3164 log.go:172] (0xc00003b760) Data frame received for 5\nI0506 21:01:24.500474 3164 log.go:172] (0xc0003d54a0) (5) Data frame handling\nI0506 21:01:24.500482 3164 log.go:172] (0xc0003d54a0) (5) Data frame sent\nI0506 21:01:24.500490 3164 log.go:172] (0xc00003b760) Data frame received for 5\nI0506 21:01:24.500497 3164 log.go:172] (0xc0003d54a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 21:01:24.502649 3164 log.go:172] (0xc00003b760) Data frame received for 1\nI0506 21:01:24.502676 3164 log.go:172] (0xc000b72640) (1) Data frame handling\nI0506 21:01:24.502688 3164 log.go:172] (0xc000b72640) (1) Data frame sent\nI0506 21:01:24.502703 3164 log.go:172] (0xc00003b760) (0xc000b72640) Stream removed, broadcasting: 1\nI0506 21:01:24.502724 3164 log.go:172] (0xc00003b760) Go away received\nI0506 21:01:24.503074 3164 log.go:172] (0xc00003b760) (0xc000b72640) Stream removed, broadcasting: 1\nI0506 21:01:24.503100 3164 log.go:172] (0xc00003b760) (0xc00052cdc0) Stream removed, broadcasting: 3\nI0506 21:01:24.503116 3164 log.go:172] (0xc00003b760) (0xc0003d54a0) Stream removed, broadcasting: 5\n" May 6 21:01:24.507: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 21:01:24.507: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 21:01:24.507: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5995 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 21:01:24.763: INFO: stderr: "I0506 21:01:24.639283 3184 log.go:172] (0xc000b873f0) (0xc000ba61e0) Create stream\nI0506 21:01:24.639340 3184 log.go:172] (0xc000b873f0) (0xc000ba61e0) Stream added, broadcasting: 1\nI0506 21:01:24.644731 3184 log.go:172] (0xc000b873f0) Reply frame received for 1\nI0506 21:01:24.644796 3184 log.go:172] (0xc000b873f0) (0xc000832e60) Create stream\nI0506 21:01:24.644831 3184 log.go:172] (0xc000b873f0) (0xc000832e60) Stream added, broadcasting: 3\nI0506 21:01:24.646181 3184 log.go:172] (0xc000b873f0) Reply frame received for 3\nI0506 21:01:24.646210 3184 log.go:172] (0xc000b873f0) (0xc000708be0) Create stream\nI0506 21:01:24.646219 3184 log.go:172] (0xc000b873f0) (0xc000708be0) Stream added, broadcasting: 5\nI0506 21:01:24.647195 3184 log.go:172] (0xc000b873f0) Reply frame received for 5\nI0506 21:01:24.713975 3184 log.go:172] (0xc000b873f0) Data frame received for 5\nI0506 21:01:24.714026 3184 log.go:172] (0xc000708be0) 
(5) Data frame handling\nI0506 21:01:24.714083 3184 log.go:172] (0xc000708be0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 21:01:24.755364 3184 log.go:172] (0xc000b873f0) Data frame received for 3\nI0506 21:01:24.755391 3184 log.go:172] (0xc000832e60) (3) Data frame handling\nI0506 21:01:24.755399 3184 log.go:172] (0xc000832e60) (3) Data frame sent\nI0506 21:01:24.755520 3184 log.go:172] (0xc000b873f0) Data frame received for 5\nI0506 21:01:24.755547 3184 log.go:172] (0xc000708be0) (5) Data frame handling\nI0506 21:01:24.755567 3184 log.go:172] (0xc000b873f0) Data frame received for 3\nI0506 21:01:24.755579 3184 log.go:172] (0xc000832e60) (3) Data frame handling\nI0506 21:01:24.758102 3184 log.go:172] (0xc000b873f0) Data frame received for 1\nI0506 21:01:24.758138 3184 log.go:172] (0xc000ba61e0) (1) Data frame handling\nI0506 21:01:24.758160 3184 log.go:172] (0xc000ba61e0) (1) Data frame sent\nI0506 21:01:24.758183 3184 log.go:172] (0xc000b873f0) (0xc000ba61e0) Stream removed, broadcasting: 1\nI0506 21:01:24.758213 3184 log.go:172] (0xc000b873f0) Go away received\nI0506 21:01:24.758527 3184 log.go:172] (0xc000b873f0) (0xc000ba61e0) Stream removed, broadcasting: 1\nI0506 21:01:24.758551 3184 log.go:172] (0xc000b873f0) (0xc000832e60) Stream removed, broadcasting: 3\nI0506 21:01:24.758562 3184 log.go:172] (0xc000b873f0) (0xc000708be0) Stream removed, broadcasting: 5\n" May 6 21:01:24.763: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 21:01:24.763: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 21:01:24.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5995 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 21:01:25.081: INFO: stderr: "I0506 21:01:24.954305 3204 log.go:172] (0xc000a20000) (0xc0005a6640) Create stream\nI0506 21:01:24.954358 3204 log.go:172] (0xc000a20000) (0xc0005a6640) Stream added, broadcasting: 1\nI0506 21:01:24.956348 3204 log.go:172] (0xc000a20000) Reply frame received for 1\nI0506 21:01:24.956391 3204 log.go:172] (0xc000a20000) (0xc00043ae60) Create stream\nI0506 21:01:24.956403 3204 log.go:172] (0xc000a20000) (0xc00043ae60) Stream added, broadcasting: 3\nI0506 21:01:24.957584 3204 log.go:172] (0xc000a20000) Reply frame received for 3\nI0506 21:01:24.957629 3204 log.go:172] (0xc000a20000) (0xc0004c41e0) Create stream\nI0506 21:01:24.957647 3204 log.go:172] (0xc000a20000) (0xc0004c41e0) Stream added, broadcasting: 5\nI0506 21:01:24.958382 3204 log.go:172] (0xc000a20000) Reply frame received for 5\nI0506 21:01:25.019476 3204 log.go:172] (0xc000a20000) Data frame received for 5\nI0506 21:01:25.019502 3204 log.go:172] (0xc0004c41e0) (5) Data frame handling\nI0506 21:01:25.019518 3204 log.go:172] (0xc0004c41e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 21:01:25.073052 3204 log.go:172] (0xc000a20000) Data frame received for 3\nI0506 21:01:25.073084 3204 log.go:172] (0xc00043ae60) (3) Data frame handling\nI0506 21:01:25.073099 3204 log.go:172] (0xc00043ae60) (3) Data frame sent\nI0506 21:01:25.073631 3204 log.go:172] (0xc000a20000) Data frame received for 5\nI0506 21:01:25.073670 3204 log.go:172] (0xc0004c41e0) (5) Data frame handling\nI0506 21:01:25.073991 3204 log.go:172] (0xc000a20000) Data frame received for 3\nI0506 21:01:25.074020 3204 
log.go:172] (0xc00043ae60) (3) Data frame handling\nI0506 21:01:25.075570 3204 log.go:172] (0xc000a20000) Data frame received for 1\nI0506 21:01:25.075586 3204 log.go:172] (0xc0005a6640) (1) Data frame handling\nI0506 21:01:25.075606 3204 log.go:172] (0xc0005a6640) (1) Data frame sent\nI0506 21:01:25.075744 3204 log.go:172] (0xc000a20000) (0xc0005a6640) Stream removed, broadcasting: 1\nI0506 21:01:25.075764 3204 log.go:172] (0xc000a20000) Go away received\nI0506 21:01:25.076109 3204 log.go:172] (0xc000a20000) (0xc0005a6640) Stream removed, broadcasting: 1\nI0506 21:01:25.076128 3204 log.go:172] (0xc000a20000) (0xc00043ae60) Stream removed, broadcasting: 3\nI0506 21:01:25.076139 3204 log.go:172] (0xc000a20000) (0xc0004c41e0) Stream removed, broadcasting: 5\n" May 6 21:01:25.082: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 21:01:25.082: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 21:01:25.082: INFO: Waiting for statefulset status.replicas updated to 0 May 6 21:01:25.085: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 6 21:01:35.093: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 21:01:35.093: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 6 21:01:35.093: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 6 21:01:35.180: INFO: POD NODE PHASE GRACE CONDITIONS May 6 21:01:35.180: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:39 +0000 UTC }] May 6 21:01:35.180: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC }] May 6 21:01:35.180: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC }] May 6 21:01:35.180: INFO: May 6 21:01:35.180: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 21:01:36.608: INFO: POD NODE PHASE GRACE CONDITIONS May 6 21:01:36.608: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:39 +0000 UTC }] May 6 21:01:36.608: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC }] May 6 21:01:36.608: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC }] May 6 21:01:36.608: INFO: May 6 21:01:36.608: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 21:01:37.810: INFO: POD NODE PHASE GRACE CONDITIONS May 6 21:01:37.810: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:39 +0000 UTC }] May 6 21:01:37.810: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC }] May 6 21:01:37.810: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC }] May 6 21:01:37.810: INFO: May 6 21:01:37.810: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 21:01:38.954: INFO: POD NODE PHASE GRACE CONDITIONS May 6 21:01:38.954: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:39 +0000 UTC }] May 6 
21:01:38.954: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC }] May 6 21:01:38.954: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:03 +0000 UTC }] May 6 21:01:38.954: INFO: May 6 21:01:38.954: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 21:01:39.958: INFO: POD NODE PHASE GRACE CONDITIONS May 6 21:01:39.958: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:39 +0000 UTC }] May 6 21:01:39.958: INFO: May 6 21:01:39.958: INFO: StatefulSet ss has not reached scale 0, at 1 May 6 21:01:40.963: INFO: POD NODE PHASE GRACE CONDITIONS May 6 21:01:40.963: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:01:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 21:00:39 +0000 UTC }] May 6 21:01:40.963: INFO: May 6 21:01:40.963: INFO: StatefulSet ss has not reached scale 0, at 1 May 6 21:01:41.982: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.136663339s May 6 21:01:42.985: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.117912817s May 6 21:01:43.988: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.114576219s May 6 21:01:44.991: INFO: Verifying statefulset ss doesn't scale past 0 for another 111.901992ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5995 May 6 21:01:46.012: INFO: Scaling statefulset ss to 0 May 6 21:01:46.021: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 6 21:01:46.100: INFO: Deleting all statefulset in ns statefulset-5995 May 6 21:01:46.103: INFO: Scaling statefulset ss to 0 May 6 21:01:46.281: INFO: Waiting for statefulset status.replicas updated to 0 May 6 21:01:46.284: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:01:46.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5995" for this suite. • [SLOW TEST:67.823 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":212,"skipped":3426,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:01:46.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 6 21:01:47.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-392' May 6 21:01:47.308: INFO: stderr: "" May 6 21:01:47.308: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 6 21:01:47.341: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-392' May 6 21:01:55.228: INFO: stderr: "" May 6 21:01:55.228: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:01:55.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-392" for this suite. 
• [SLOW TEST:8.828 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":213,"skipped":3438,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:01:55.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:01:55.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 6 21:01:55.990: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T21:01:55Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-06T21:01:55Z]] name:name1 resourceVersion:2101045 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:167e00bc-8f0b-47ba-9a41-7b98940c97ea] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 6 21:02:06.116: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T21:02:05Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-06T21:02:05Z]] name:name2 resourceVersion:2101084 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5539e473-cc20-44fd-8452-04d237632aa0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 6 21:02:16.122: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T21:01:55Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-06T21:02:16Z]] name:name1 resourceVersion:2101110 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:167e00bc-8f0b-47ba-9a41-7b98940c97ea] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 6 
21:02:26.131: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T21:02:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-06T21:02:26Z]] name:name2 resourceVersion:2101141 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5539e473-cc20-44fd-8452-04d237632aa0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 6 21:02:36.139: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T21:01:55Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-06T21:02:16Z]] name:name1 resourceVersion:2101166 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:167e00bc-8f0b-47ba-9a41-7b98940c97ea] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 6 21:02:46.147: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T21:02:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-06T21:02:26Z]] name:name2 resourceVersion:2101196 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5539e473-cc20-44fd-8452-04d237632aa0] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:02:56.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9022" for this suite. 
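------------------------------
The CRD watch test above registers a custom resource definition, then asserts that a watch sees ADDED, MODIFIED and DELETED events in order as the CRs name1 and name2 are created, updated and removed. A rough hand-driven equivalent, assuming cluster-admin rights on a cluster of this vintage (apiextensions.k8s.io/v1beta1 was removed in Kubernetes 1.22); the manifest mirrors the group, version and kind visible in the events above:

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  version: v1beta1
  scope: Cluster            # matches the non-namespaced selfLink in the log
  names:
    plural: noxus
    kind: WishIHadChosenNoxu
EOF
# Once the CRD is Established, stream changes in the background...
kubectl get noxus --watch &
# ...and generate ADDED / DELETED events:
cat <<'EOF' | kubectl apply -f -
apiVersion: mygroup.example.com/v1beta1
kind: WishIHadChosenNoxu
metadata:
  name: name1
content:
  key: value
EOF
kubectl delete noxus name1
------------------------------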
• [SLOW TEST:61.415 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":214,"skipped":3444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:02:56.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 6 21:02:56.734: INFO: Waiting up to 5m0s for pod "pod-a919adf5-f87a-4874-9e70-16d4c3186ca6" in namespace "emptydir-2102" to be "Succeeded or Failed" May 6 21:02:56.779: INFO: Pod "pod-a919adf5-f87a-4874-9e70-16d4c3186ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 44.673803ms May 6 21:02:58.784: INFO: Pod "pod-a919adf5-f87a-4874-9e70-16d4c3186ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049363743s May 6 21:03:00.787: INFO: Pod "pod-a919adf5-f87a-4874-9e70-16d4c3186ca6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053014738s STEP: Saw pod success May 6 21:03:00.787: INFO: Pod "pod-a919adf5-f87a-4874-9e70-16d4c3186ca6" satisfied condition "Succeeded or Failed" May 6 21:03:00.790: INFO: Trying to get logs from node latest-worker pod pod-a919adf5-f87a-4874-9e70-16d4c3186ca6 container test-container: STEP: delete the pod May 6 21:03:00.823: INFO: Waiting for pod pod-a919adf5-f87a-4874-9e70-16d4c3186ca6 to disappear May 6 21:03:00.852: INFO: Pod pod-a919adf5-f87a-4874-9e70-16d4c3186ca6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:03:00.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2102" for this suite. 
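------------------------------
The EmptyDir test above mounts an emptyDir volume on the default medium, writes a 0644 root-owned file, and treats the pod reaching Succeeded (plus its log output) as the pass signal. The suite uses its own mount-test image; the sketch below reproduces the pattern with stock busybox, so the image and names are stand-ins:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo hello > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}    # default medium is node-local disk; medium: Memory would use tmpfs
EOF
kubectl logs emptydir-demo    # inspect mode and content once the pod is Succeeded
------------------------------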
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":215,"skipped":3485,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:03:00.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-f7a5d7bf-a241-48ea-8ab7-90b7f758e78a STEP: Creating a pod to test consume configMaps May 6 21:03:01.564: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c66d9efd-a2ba-4a6e-bbcb-92e9076c7e85" in namespace "projected-829" to be "Succeeded or Failed" May 6 21:03:01.566: INFO: Pod "pod-projected-configmaps-c66d9efd-a2ba-4a6e-bbcb-92e9076c7e85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318683ms May 6 21:03:03.671: INFO: Pod "pod-projected-configmaps-c66d9efd-a2ba-4a6e-bbcb-92e9076c7e85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107778746s May 6 21:03:05.675: INFO: Pod "pod-projected-configmaps-c66d9efd-a2ba-4a6e-bbcb-92e9076c7e85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111352959s May 6 21:03:07.679: INFO: Pod "pod-projected-configmaps-c66d9efd-a2ba-4a6e-bbcb-92e9076c7e85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115494259s STEP: Saw pod success May 6 21:03:07.679: INFO: Pod "pod-projected-configmaps-c66d9efd-a2ba-4a6e-bbcb-92e9076c7e85" satisfied condition "Succeeded or Failed" May 6 21:03:07.682: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-c66d9efd-a2ba-4a6e-bbcb-92e9076c7e85 container projected-configmap-volume-test: STEP: delete the pod May 6 21:03:07.697: INFO: Waiting for pod pod-projected-configmaps-c66d9efd-a2ba-4a6e-bbcb-92e9076c7e85 to disappear May 6 21:03:07.727: INFO: Pod pod-projected-configmaps-c66d9efd-a2ba-4a6e-bbcb-92e9076c7e85 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:03:07.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-829" for this suite. 
• [SLOW TEST:6.877 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":216,"skipped":3530,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:03:07.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-7ae58e41-406b-401f-8c68-32c4fa2f0a14 STEP: Creating a pod to test consume configMaps May 6 21:03:07.835: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e70e091-918f-4bdb-9291-e38d98d4d589" in namespace "projected-5328" to be "Succeeded or Failed" May 6 21:03:07.882: INFO: Pod "pod-projected-configmaps-0e70e091-918f-4bdb-9291-e38d98d4d589": Phase="Pending", Reason="", readiness=false. Elapsed: 46.95747ms May 6 21:03:09.886: INFO: Pod "pod-projected-configmaps-0e70e091-918f-4bdb-9291-e38d98d4d589": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051020703s May 6 21:03:12.139: INFO: Pod "pod-projected-configmaps-0e70e091-918f-4bdb-9291-e38d98d4d589": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30364368s May 6 21:03:14.151: INFO: Pod "pod-projected-configmaps-0e70e091-918f-4bdb-9291-e38d98d4d589": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.316028155s STEP: Saw pod success May 6 21:03:14.151: INFO: Pod "pod-projected-configmaps-0e70e091-918f-4bdb-9291-e38d98d4d589" satisfied condition "Succeeded or Failed" May 6 21:03:14.170: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-0e70e091-918f-4bdb-9291-e38d98d4d589 container projected-configmap-volume-test: STEP: delete the pod May 6 21:03:14.250: INFO: Waiting for pod pod-projected-configmaps-0e70e091-918f-4bdb-9291-e38d98d4d589 to disappear May 6 21:03:14.278: INFO: Pod pod-projected-configmaps-0e70e091-918f-4bdb-9291-e38d98d4d589 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:03:14.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5328" for this suite. 
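------------------------------
The non-root variant above differs from the previous entry only in the pod's security context: the kubelet must leave the projected files readable by an unprivileged UID. A sketch of that delta, reusing the projected-demo ConfigMap from the previous note (the UID/GID values are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000     # container process runs as a non-root UID
    fsGroup: 1000       # projected files are group-owned by this GID
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-demo
          items:
          - key: data-1
            path: path/to/data-1
EOF
------------------------------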
• [SLOW TEST:6.804 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":217,"skipped":3558,"failed":0} S ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:03:14.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 21:03:14.664: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0548edba-bf6f-4f19-b43b-f5a1d7f8d5e8" in namespace "downward-api-3748" to be "Succeeded or Failed" May 6 21:03:14.691: INFO: Pod "downwardapi-volume-0548edba-bf6f-4f19-b43b-f5a1d7f8d5e8": Phase="Pending", Reason="", readiness=false. Elapsed: 26.99153ms May 6 21:03:16.694: INFO: Pod "downwardapi-volume-0548edba-bf6f-4f19-b43b-f5a1d7f8d5e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030300929s May 6 21:03:18.698: INFO: Pod "downwardapi-volume-0548edba-bf6f-4f19-b43b-f5a1d7f8d5e8": Phase="Running", Reason="", readiness=true. Elapsed: 4.034362081s May 6 21:03:20.702: INFO: Pod "downwardapi-volume-0548edba-bf6f-4f19-b43b-f5a1d7f8d5e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037723247s STEP: Saw pod success May 6 21:03:20.702: INFO: Pod "downwardapi-volume-0548edba-bf6f-4f19-b43b-f5a1d7f8d5e8" satisfied condition "Succeeded or Failed" May 6 21:03:20.704: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-0548edba-bf6f-4f19-b43b-f5a1d7f8d5e8 container client-container: STEP: delete the pod May 6 21:03:20.793: INFO: Waiting for pod downwardapi-volume-0548edba-bf6f-4f19-b43b-f5a1d7f8d5e8 to disappear May 6 21:03:20.887: INFO: Pod downwardapi-volume-0548edba-bf6f-4f19-b43b-f5a1d7f8d5e8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:03:20.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3748" for this suite. 
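------------------------------
The Downward API volume test above projects pod metadata into files and asserts a per-item file mode; its sibling in the next entry asserts the volume-wide DefaultMode instead. One sketch covers both knobs (names illustrative; 0400 makes the file owner-read-only, which is what ls reports):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo/podname && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      # defaultMode: 0400   # volume-wide mode, what the DefaultMode test checks
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400          # per-item mode, what this test checks
EOF
kubectl logs downward-demo    # shows -r-------- and the pod's own name
------------------------------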
• [SLOW TEST:6.367 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":218,"skipped":3559,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:03:20.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 21:03:21.238: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7545f8e-64c5-47d4-924f-9ea789084836" in namespace "downward-api-5855" to be "Succeeded or Failed" May 6 21:03:21.355: INFO: Pod "downwardapi-volume-e7545f8e-64c5-47d4-924f-9ea789084836": Phase="Pending", Reason="", readiness=false. Elapsed: 116.88584ms May 6 21:03:23.367: INFO: Pod "downwardapi-volume-e7545f8e-64c5-47d4-924f-9ea789084836": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128590059s May 6 21:03:25.481: INFO: Pod "downwardapi-volume-e7545f8e-64c5-47d4-924f-9ea789084836": Phase="Running", Reason="", readiness=true. Elapsed: 4.242478226s May 6 21:03:27.485: INFO: Pod "downwardapi-volume-e7545f8e-64c5-47d4-924f-9ea789084836": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.246148322s STEP: Saw pod success May 6 21:03:27.485: INFO: Pod "downwardapi-volume-e7545f8e-64c5-47d4-924f-9ea789084836" satisfied condition "Succeeded or Failed" May 6 21:03:27.488: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e7545f8e-64c5-47d4-924f-9ea789084836 container client-container: STEP: delete the pod May 6 21:03:28.272: INFO: Waiting for pod downwardapi-volume-e7545f8e-64c5-47d4-924f-9ea789084836 to disappear May 6 21:03:28.523: INFO: Pod downwardapi-volume-e7545f8e-64c5-47d4-924f-9ea789084836 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:03:28.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5855" for this suite. 
• [SLOW TEST:7.625 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":219,"skipped":3568,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:03:28.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:03:28.902: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 6 21:03:28.912: INFO: Number of nodes with available pods: 0 May 6 21:03:28.912: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 6 21:03:29.008: INFO: Number of nodes with available pods: 0 May 6 21:03:29.008: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:30.019: INFO: Number of nodes with available pods: 0 May 6 21:03:30.019: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:31.012: INFO: Number of nodes with available pods: 0 May 6 21:03:31.012: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:32.012: INFO: Number of nodes with available pods: 0 May 6 21:03:32.012: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:33.237: INFO: Number of nodes with available pods: 1 May 6 21:03:33.237: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 6 21:03:33.484: INFO: Number of nodes with available pods: 1 May 6 21:03:33.485: INFO: Number of running nodes: 0, number of available pods: 1 May 6 21:03:34.489: INFO: Number of nodes with available pods: 0 May 6 21:03:34.489: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 6 21:03:34.690: INFO: Number of nodes with available pods: 0 May 6 21:03:34.690: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:35.695: INFO: Number of nodes with available pods: 0 May 6 21:03:35.695: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:36.695: INFO: Number of nodes with available pods: 0 May 6 21:03:36.695: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:37.694: INFO: Number of nodes with available pods: 0 May 6 21:03:37.694: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:38.695: INFO: Number of nodes with available pods: 0 May 6 21:03:38.695: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:39.695: INFO: Number of nodes with available pods: 0 May 6 21:03:39.695: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:40.695: INFO: Number of nodes with available pods: 0 May 6 21:03:40.695: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:41.695: INFO: Number of nodes with available pods: 0 May 6 21:03:41.695: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:42.694: INFO: Number of nodes with available pods: 0 May 6 21:03:42.694: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:43.872: INFO: Number of nodes with available pods: 0 May 6 21:03:43.872: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:44.695: INFO: Number of nodes with available pods: 0 May 6 21:03:44.695: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:45.695: INFO: Number of nodes with available pods: 0 May 6 21:03:45.695: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:46.695: INFO: Number of nodes with available pods: 0 May 6 21:03:46.695: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:47.695: INFO: Number of nodes with available pods: 0 May 6 21:03:47.695: INFO: Node latest-worker is running more than one daemon pod May 6 21:03:48.694: INFO: Number of nodes with available pods: 1 May 6 21:03:48.694: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet 
"daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7766, will wait for the garbage collector to delete the pods May 6 21:03:48.758: INFO: Deleting DaemonSet.extensions daemon-set took: 6.840488ms May 6 21:03:49.058: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.292324ms May 6 21:03:54.881: INFO: Number of nodes with available pods: 0 May 6 21:03:54.881: INFO: Number of running nodes: 0, number of available pods: 0 May 6 21:03:54.884: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7766/daemonsets","resourceVersion":"2101576"},"items":null} May 6 21:03:54.886: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7766/pods","resourceVersion":"2101576"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:03:54.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7766" for this suite. • [SLOW TEST:26.409 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":220,"skipped":3575,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:03:54.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:04:00.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7312" for this suite. 
• [SLOW TEST:5.453 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":221,"skipped":3625,"failed":0} SS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:04:00.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-e0f78ef9-06a9-4fb4-953d-6ca69d875e08 in namespace container-probe-2271 May 6 21:04:06.696: INFO: Started pod liveness-e0f78ef9-06a9-4fb4-953d-6ca69d875e08 in namespace container-probe-2271 STEP: checking the pod's current state and verifying that restartCount is present May 6 21:04:06.699: INFO: Initial restart count of pod liveness-e0f78ef9-06a9-4fb4-953d-6ca69d875e08 is 0 May 6 21:04:28.759: INFO: Restart count of pod container-probe-2271/liveness-e0f78ef9-06a9-4fb4-953d-6ca69d875e08 is now 1 (22.059711734s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:04:28.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2271" for this suite. 
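------------------------------
The probe test above runs a container whose /healthz HTTP endpoint starts failing and asserts that restartCount climbs from 0 to 1; the ~22s gap in the log is the probe failures accruing plus the restart. The suite uses an image built to fail its own /healthz; as a stand-in, this sketch swaps in an exec probe on stock busybox to provoke the same kubelet restart behavior:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /tmp/healthy; sleep 20; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # begins failing after 20 seconds
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF
kubectl get pod liveness-demo --watch      # RESTARTS increments once probes fail
------------------------------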
• [SLOW TEST:28.513 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":222,"skipped":3627,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:04:28.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 6 21:04:29.592: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4277' May 6 21:04:30.592: INFO: stderr: "" May 6 21:04:30.592: INFO: stdout: "pod/pause created\n" May 6 21:04:30.592: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 6 21:04:30.593: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4277" to be "running and ready" May 6 21:04:30.675: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 82.259125ms May 6 21:04:32.678: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085689032s May 6 21:04:34.682: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.089884988s May 6 21:04:34.683: INFO: Pod "pause" satisfied condition "running and ready" May 6 21:04:34.683: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 6 21:04:34.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4277' May 6 21:04:34.801: INFO: stderr: "" May 6 21:04:34.801: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 6 21:04:34.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4277' May 6 21:04:34.901: INFO: stderr: "" May 6 21:04:34.901: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 6 21:04:34.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4277' May 6 21:04:34.994: INFO: stderr: "" May 6 21:04:34.994: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 6 21:04:34.994: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4277' May 6 21:04:35.127: INFO: stderr: "" May 6 21:04:35.127: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 6 21:04:35.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4277' May 6 21:04:35.314: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 21:04:35.314: INFO: stdout: "pod \"pause\" force deleted\n" May 6 21:04:35.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4277' May 6 21:04:35.424: INFO: stderr: "No resources found in kubectl-4277 namespace.\n" May 6 21:04:35.424: INFO: stdout: "" May 6 21:04:35.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4277 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 21:04:35.527: INFO: stderr: "" May 6 21:04:35.527: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:04:35.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4277" for this suite. 
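------------------------------
The label test above is pure kubectl mechanics: attach a label, read it back with -L, then remove it with the trailing-dash form, exactly as the commands in the log show. The same sequence, runnable anywhere (pod name illustrative):

kubectl run pause-demo --image=k8s.gcr.io/pause:3.2 --restart=Never
kubectl label pods pause-demo testing-label=testing-label-value
kubectl get pod pause-demo -L testing-label    # value appears in a TESTING-LABEL column
kubectl label pods pause-demo testing-label-   # trailing '-' deletes the label
kubectl get pod pause-demo -L testing-label    # column is now empty
kubectl delete pod pause-demo --grace-period=0 --force
------------------------------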
• [SLOW TEST:6.626 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":223,"skipped":3649,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:04:35.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-d6cbfea4-6f89-4252-bd04-dd65c599c26f STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-d6cbfea4-6f89-4252-bd04-dd65c599c26f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:04:42.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3422" for this suite. • [SLOW TEST:6.704 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":224,"skipped":3652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:04:42.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:04:54.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3623" for this suite. • [SLOW TEST:12.640 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":288,"completed":225,"skipped":3680,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:04:54.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:05:08.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4840" for this suite. • [SLOW TEST:13.494 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":288,"completed":226,"skipped":3680,"failed":0} SS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:05:08.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-cf2edbb4-1369-4a17-a49a-7585e05f89d2 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:05:08.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1763" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":227,"skipped":3682,"failed":0} ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:05:08.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 6 21:05:08.795: INFO: Waiting up to 5m0s for pod "client-containers-ad55af9d-e482-41d1-97ef-ec73be008930" in namespace "containers-6607" to be "Succeeded or Failed" May 6 21:05:09.085: INFO: Pod "client-containers-ad55af9d-e482-41d1-97ef-ec73be008930": Phase="Pending", Reason="", readiness=false. Elapsed: 289.840856ms May 6 21:05:11.098: INFO: Pod "client-containers-ad55af9d-e482-41d1-97ef-ec73be008930": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302922164s May 6 21:05:13.207: INFO: Pod "client-containers-ad55af9d-e482-41d1-97ef-ec73be008930": Phase="Running", Reason="", readiness=true. Elapsed: 4.411394415s May 6 21:05:15.210: INFO: Pod "client-containers-ad55af9d-e482-41d1-97ef-ec73be008930": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.415177036s STEP: Saw pod success May 6 21:05:15.210: INFO: Pod "client-containers-ad55af9d-e482-41d1-97ef-ec73be008930" satisfied condition "Succeeded or Failed" May 6 21:05:15.213: INFO: Trying to get logs from node latest-worker pod client-containers-ad55af9d-e482-41d1-97ef-ec73be008930 container test-container: STEP: delete the pod May 6 21:05:15.423: INFO: Waiting for pod client-containers-ad55af9d-e482-41d1-97ef-ec73be008930 to disappear May 6 21:05:15.438: INFO: Pod client-containers-ad55af9d-e482-41d1-97ef-ec73be008930 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:05:15.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6607" for this suite. • [SLOW TEST:6.900 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":228,"skipped":3682,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:05:15.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 6 21:05:19.812: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:05:19.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7080" for this suite. 
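------------------------------
The termination-message test above checks that a container running as a non-root user can write to a non-default terminationMessagePath and that the kubelet copies the content into the container status, which is where the "Expected: &{DONE}" comparison comes from. A hand-rolled version (path and UID are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "printf DONE > /dev/custom-termination-log"]
    terminationMessagePath: /dev/custom-termination-log   # non-default path
    securityContext:
      runAsUser: 1000                                     # non-root writer
EOF
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
------------------------------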
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":229,"skipped":3690,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:05:19.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 6 21:05:24.658: INFO: Successfully updated pod "pod-update-9a6b1ce1-e167-4a48-94e8-0b5ca8895eda" STEP: verifying the updated pod is in kubernetes May 6 21:05:24.724: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:05:24.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3258" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":230,"skipped":3703,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:05:24.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 21:05:25.285: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 21:05:27.297: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395925, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395925, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395925, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395925, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 21:05:29.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395925, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395925, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395925, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395925, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 21:05:32.331: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 6 21:05:36.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-2947 to-be-attached-pod -i -c=container1' May 6 21:05:36.699: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:05:36.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2947" for this suite. STEP: Destroying namespace "webhook-2947-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.114 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":231,"skipped":3705,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:05:36.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 6 21:05:36.954: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 21:05:36.964: INFO: Waiting for terminating namespaces to be deleted... May 6 21:05:36.966: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 6 21:05:36.971: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 6 21:05:36.971: INFO: Container kindnet-cni ready: true, restart count 0 May 6 21:05:36.971: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 6 21:05:36.971: INFO: Container kube-proxy ready: true, restart count 0 May 6 21:05:36.971: INFO: sample-webhook-deployment-75dd644756-vjvl8 from webhook-2947 started at 2020-05-06 21:05:25 +0000 UTC (1 container status recorded) May 6 21:05:36.971: INFO: Container sample-webhook ready: true, restart count 0 May 6 21:05:36.971: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 6 21:05:36.975: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 6 21:05:36.975: INFO: Container kindnet-cni ready: true, restart count 0 May 6 21:05:36.975: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 6 21:05:36.975: INFO: Container kube-proxy ready: true, restart count 0 May 6 21:05:36.975: INFO: to-be-attached-pod from webhook-2947 started at 2020-05-06 21:05:32 +0000 UTC (1 container status recorded) May 6 21:05:36.975: INFO: Container container1 ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160c8c65b6a97449], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.160c8c65bac681bb], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:05:37.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2094" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":232,"skipped":3716,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:05:38.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 6 21:05:38.864: INFO: PodSpec: initContainers in spec.initContainers May 6 21:06:38.009: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b9d0e7ca-09ee-4871-a572-d9d3c7e21eae", GenerateName:"", Namespace:"init-container-2722", SelfLink:"/api/v1/namespaces/init-container-2722/pods/pod-init-b9d0e7ca-09ee-4871-a572-d9d3c7e21eae", UID:"f94821fb-a766-4982-809a-b9a476ad3dba", ResourceVersion:"2102500", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724395938, loc:(*time.Location)(0x7c2f200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"864351456"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0024f00c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0024f0100)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0024f0140), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0024f0180)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-wm8cn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), 
GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0069fe000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wm8cn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wm8cn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wm8cn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0024ca1a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00252e000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024ca2c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024ca2f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0024ca2f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0024ca2fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395939, loc:(*time.Location)(0x7c2f200)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395939, loc:(*time.Location)(0x7c2f200)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395939, loc:(*time.Location)(0x7c2f200)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724395938, loc:(*time.Location)(0x7c2f200)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.1.196", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.196"}}, StartTime:(*v1.Time)(0xc0024f01c0), 
InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0024f0240), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00252e0e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://0d197cca7a4c64bf94e2df13439d6803b0cc1ea0da9c7a5304ff2005e2bf683c", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024f0280), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024f0200), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0024ca3ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:06:38.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2722" for this suite. 
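The pod dump above captures the behaviour under test: init1 runs /bin/false and keeps failing (RestartCount:3, last state Terminated), so init2 (/bin/true) is still Waiting and the app container run1 is never started; the pod stays Pending with reason ContainersNotInitialized. A minimal manifest with the same shape (pod name hypothetical; images and commands taken from the dump):

    # RestartPolicy Always plus a failing init container: later init containers
    # and app containers never start, and the failing init container restarts forever.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-init-demo
    spec:
      restartPolicy: Always
      initContainers:
      - name: init1
        image: docker.io/library/busybox:1.29
        command: ["/bin/false"]    # always fails
      - name: init2
        image: docker.io/library/busybox:1.29
        command: ["/bin/true"]     # never reached
      containers:
      - name: run1
        image: k8s.gcr.io/pause:3.2
    EOF
    # Watch the status cycle through Init:0/2, Init:Error, Init:CrashLoopBackOff.
    kubectl get pod pod-init-demo -w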
• [SLOW TEST:60.220 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":233,"skipped":3755,"failed":0} SSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:06:38.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-5809 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5809 to expose endpoints map[] May 6 21:06:38.535: INFO: Get endpoints failed (34.344463ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 6 21:06:39.542: INFO: successfully validated that service endpoint-test2 in namespace services-5809 exposes endpoints map[] (1.041449891s elapsed) STEP: Creating pod pod1 in namespace services-5809 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5809 to expose endpoints map[pod1:[80]] May 6 21:06:44.225: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.678514215s elapsed, will retry) May 6 21:06:46.793: INFO: successfully validated that service endpoint-test2 in namespace services-5809 exposes endpoints map[pod1:[80]] (7.245835802s elapsed) STEP: Creating pod pod2 in namespace services-5809 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5809 to expose endpoints map[pod1:[80] pod2:[80]] May 6 21:06:51.408: INFO: successfully validated that service endpoint-test2 in namespace services-5809 exposes endpoints map[pod1:[80] pod2:[80]] (4.6109884s elapsed) STEP: Deleting pod pod1 in namespace services-5809 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5809 to expose endpoints map[pod2:[80]] May 6 21:06:52.857: INFO: successfully validated that service endpoint-test2 in namespace services-5809 exposes endpoints map[pod2:[80]] (1.443469281s elapsed) STEP: Deleting pod pod2 in namespace services-5809 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5809 to expose endpoints map[] May 6 21:06:54.145: INFO: successfully validated that service endpoint-test2 in namespace services-5809 exposes endpoints map[] (1.284231595s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:06:55.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "services-5809" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:17.574 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":234,"skipped":3759,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:06:55.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-54f3c0c4-26e6-466e-a802-c865533be3fd STEP: Creating a pod to test consume secrets May 6 21:06:56.426: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d40a888f-b12d-4281-aadc-09bc9c8cab56" in namespace "projected-3371" to be "Succeeded or Failed" May 6 21:06:56.644: INFO: Pod "pod-projected-secrets-d40a888f-b12d-4281-aadc-09bc9c8cab56": Phase="Pending", Reason="", readiness=false. Elapsed: 217.669618ms May 6 21:06:58.663: INFO: Pod "pod-projected-secrets-d40a888f-b12d-4281-aadc-09bc9c8cab56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23673997s May 6 21:07:00.670: INFO: Pod "pod-projected-secrets-d40a888f-b12d-4281-aadc-09bc9c8cab56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243865947s May 6 21:07:02.673: INFO: Pod "pod-projected-secrets-d40a888f-b12d-4281-aadc-09bc9c8cab56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.247548122s STEP: Saw pod success May 6 21:07:02.673: INFO: Pod "pod-projected-secrets-d40a888f-b12d-4281-aadc-09bc9c8cab56" satisfied condition "Succeeded or Failed" May 6 21:07:02.676: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-d40a888f-b12d-4281-aadc-09bc9c8cab56 container projected-secret-volume-test: STEP: delete the pod May 6 21:07:02.903: INFO: Waiting for pod pod-projected-secrets-d40a888f-b12d-4281-aadc-09bc9c8cab56 to disappear May 6 21:07:02.927: INFO: Pod pod-projected-secrets-d40a888f-b12d-4281-aadc-09bc9c8cab56 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:07:02.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3371" for this suite. 
• [SLOW TEST:7.136 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":235,"skipped":3759,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:07:02.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:07:03.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9018" for this suite. 
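The Kubelet test above is a lifecycle check: a pod whose busybox command always fails, and therefore crash-loops, must still be deletable without the kubelet getting stuck. Roughly, with a hypothetical pod name (the test's exact image and command are not shown in this log):

    # A crash-looping pod must still delete cleanly.
    kubectl run bin-false --image=docker.io/library/busybox:1.29 \
      --restart=Always --command -- /bin/false
    kubectl delete pod bin-false    # should return promptly, without error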
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":236,"skipped":3761,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:07:03.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-dab8a3ea-469c-43e4-8088-cc6388e29d8e May 6 21:07:03.854: INFO: Pod name my-hostname-basic-dab8a3ea-469c-43e4-8088-cc6388e29d8e: Found 0 pods out of 1 May 6 21:07:08.932: INFO: Pod name my-hostname-basic-dab8a3ea-469c-43e4-8088-cc6388e29d8e: Found 1 pods out of 1 May 6 21:07:08.932: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-dab8a3ea-469c-43e4-8088-cc6388e29d8e" are running May 6 21:07:10.939: INFO: Pod "my-hostname-basic-dab8a3ea-469c-43e4-8088-cc6388e29d8e-9srm4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 21:07:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 21:07:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-dab8a3ea-469c-43e4-8088-cc6388e29d8e]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 21:07:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-dab8a3ea-469c-43e4-8088-cc6388e29d8e]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 21:07:03 +0000 UTC Reason: Message:}]) May 6 21:07:10.939: INFO: Trying to dial the pod May 6 21:07:16.028: INFO: Controller my-hostname-basic-dab8a3ea-469c-43e4-8088-cc6388e29d8e: Got expected result from replica 1 [my-hostname-basic-dab8a3ea-469c-43e4-8088-cc6388e29d8e-9srm4]: "my-hostname-basic-dab8a3ea-469c-43e4-8088-cc6388e29d8e-9srm4", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:07:16.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6078" for this suite. 
• [SLOW TEST:12.638 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":237,"skipped":3781,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:07:16.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5911 May 6 21:07:22.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5911 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 6 21:07:22.854: INFO: stderr: "I0506 21:07:22.758677 3448 log.go:172] (0xc000af91e0) (0xc000812f00) Create stream\nI0506 21:07:22.758743 3448 log.go:172] (0xc000af91e0) (0xc000812f00) Stream added, broadcasting: 1\nI0506 21:07:22.762665 3448 log.go:172] (0xc000af91e0) Reply frame received for 1\nI0506 21:07:22.762695 3448 log.go:172] (0xc000af91e0) (0xc000707540) Create stream\nI0506 21:07:22.762703 3448 log.go:172] (0xc000af91e0) (0xc000707540) Stream added, broadcasting: 3\nI0506 21:07:22.763407 3448 log.go:172] (0xc000af91e0) Reply frame received for 3\nI0506 21:07:22.763429 3448 log.go:172] (0xc000af91e0) (0xc0006bcd20) Create stream\nI0506 21:07:22.763437 3448 log.go:172] (0xc000af91e0) (0xc0006bcd20) Stream added, broadcasting: 5\nI0506 21:07:22.764117 3448 log.go:172] (0xc000af91e0) Reply frame received for 5\nI0506 21:07:22.845077 3448 log.go:172] (0xc000af91e0) Data frame received for 5\nI0506 21:07:22.845105 3448 log.go:172] (0xc0006bcd20) (5) Data frame handling\nI0506 21:07:22.845305 3448 log.go:172] (0xc0006bcd20) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0506 21:07:22.847587 3448 log.go:172] (0xc000af91e0) Data frame received for 3\nI0506 21:07:22.847601 3448 log.go:172] (0xc000707540) (3) Data frame handling\nI0506 21:07:22.847612 3448 log.go:172] (0xc000707540) (3) Data frame sent\nI0506 21:07:22.848280 3448 log.go:172] (0xc000af91e0) Data frame received for 5\nI0506 21:07:22.848296 3448 log.go:172] (0xc0006bcd20) (5) Data frame handling\nI0506 21:07:22.848544 3448 log.go:172] (0xc000af91e0) Data frame received for 3\nI0506 21:07:22.848554 3448 log.go:172] (0xc000707540) (3) Data frame handling\nI0506 21:07:22.850182 3448 log.go:172] 
(0xc000af91e0) Data frame received for 1\nI0506 21:07:22.850204 3448 log.go:172] (0xc000812f00) (1) Data frame handling\nI0506 21:07:22.850224 3448 log.go:172] (0xc000812f00) (1) Data frame sent\nI0506 21:07:22.850239 3448 log.go:172] (0xc000af91e0) (0xc000812f00) Stream removed, broadcasting: 1\nI0506 21:07:22.850260 3448 log.go:172] (0xc000af91e0) Go away received\nI0506 21:07:22.850516 3448 log.go:172] (0xc000af91e0) (0xc000812f00) Stream removed, broadcasting: 1\nI0506 21:07:22.850527 3448 log.go:172] (0xc000af91e0) (0xc000707540) Stream removed, broadcasting: 3\nI0506 21:07:22.850532 3448 log.go:172] (0xc000af91e0) (0xc0006bcd20) Stream removed, broadcasting: 5\n" May 6 21:07:22.854: INFO: stdout: "iptables" May 6 21:07:22.854: INFO: proxyMode: iptables May 6 21:07:22.859: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 6 21:07:22.883: INFO: Pod kube-proxy-mode-detector still exists May 6 21:07:24.884: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 6 21:07:24.887: INFO: Pod kube-proxy-mode-detector still exists May 6 21:07:26.884: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 6 21:07:26.888: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-5911 STEP: creating replication controller affinity-clusterip-timeout in namespace services-5911 I0506 21:07:26.940798 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-5911, replica count: 3 I0506 21:07:29.991234 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 21:07:32.991459 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 21:07:35.991659 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 21:07:36.142: INFO: Creating new exec pod May 6 21:07:45.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5911 execpod-affinity9gdgk -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 6 21:07:45.457: INFO: stderr: "I0506 21:07:45.387858 3468 log.go:172] (0xc0009b3130) (0xc000a72140) Create stream\nI0506 21:07:45.388240 3468 log.go:172] (0xc0009b3130) (0xc000a72140) Stream added, broadcasting: 1\nI0506 21:07:45.392369 3468 log.go:172] (0xc0009b3130) Reply frame received for 1\nI0506 21:07:45.392414 3468 log.go:172] (0xc0009b3130) (0xc000508e60) Create stream\nI0506 21:07:45.392424 3468 log.go:172] (0xc0009b3130) (0xc000508e60) Stream added, broadcasting: 3\nI0506 21:07:45.393847 3468 log.go:172] (0xc0009b3130) Reply frame received for 3\nI0506 21:07:45.393931 3468 log.go:172] (0xc0009b3130) (0xc000356140) Create stream\nI0506 21:07:45.393958 3468 log.go:172] (0xc0009b3130) (0xc000356140) Stream added, broadcasting: 5\nI0506 21:07:45.395342 3468 log.go:172] (0xc0009b3130) Reply frame received for 5\nI0506 21:07:45.448422 3468 log.go:172] (0xc0009b3130) Data frame received for 5\nI0506 21:07:45.448465 3468 log.go:172] (0xc000356140) (5) Data frame handling\nI0506 21:07:45.448507 3468 log.go:172] (0xc000356140) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0506 21:07:45.449487 3468 log.go:172] (0xc0009b3130) 
Data frame received for 5\nI0506 21:07:45.449512 3468 log.go:172] (0xc000356140) (5) Data frame handling\nI0506 21:07:45.449539 3468 log.go:172] (0xc000356140) (5) Data frame sent\nI0506 21:07:45.449581 3468 log.go:172] (0xc0009b3130) Data frame received for 5\nI0506 21:07:45.449600 3468 log.go:172] (0xc000356140) (5) Data frame handling\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0506 21:07:45.449948 3468 log.go:172] (0xc0009b3130) Data frame received for 3\nI0506 21:07:45.449988 3468 log.go:172] (0xc000508e60) (3) Data frame handling\nI0506 21:07:45.451846 3468 log.go:172] (0xc0009b3130) Data frame received for 1\nI0506 21:07:45.451871 3468 log.go:172] (0xc000a72140) (1) Data frame handling\nI0506 21:07:45.451886 3468 log.go:172] (0xc000a72140) (1) Data frame sent\nI0506 21:07:45.451921 3468 log.go:172] (0xc0009b3130) (0xc000a72140) Stream removed, broadcasting: 1\nI0506 21:07:45.451960 3468 log.go:172] (0xc0009b3130) Go away received\nI0506 21:07:45.452278 3468 log.go:172] (0xc0009b3130) (0xc000a72140) Stream removed, broadcasting: 1\nI0506 21:07:45.452297 3468 log.go:172] (0xc0009b3130) (0xc000508e60) Stream removed, broadcasting: 3\nI0506 21:07:45.452307 3468 log.go:172] (0xc0009b3130) (0xc000356140) Stream removed, broadcasting: 5\n" May 6 21:07:45.457: INFO: stdout: "" May 6 21:07:45.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5911 execpod-affinity9gdgk -- /bin/sh -x -c nc -zv -t -w 2 10.104.214.197 80' May 6 21:07:45.889: INFO: stderr: "I0506 21:07:45.820455 3488 log.go:172] (0xc00041dad0) (0xc0006bbcc0) Create stream\nI0506 21:07:45.820522 3488 log.go:172] (0xc00041dad0) (0xc0006bbcc0) Stream added, broadcasting: 1\nI0506 21:07:45.823076 3488 log.go:172] (0xc00041dad0) Reply frame received for 1\nI0506 21:07:45.823137 3488 log.go:172] (0xc00041dad0) (0xc00068a5a0) Create stream\nI0506 21:07:45.823156 3488 log.go:172] (0xc00041dad0) (0xc00068a5a0) Stream added, broadcasting: 3\nI0506 21:07:45.823958 3488 log.go:172] (0xc00041dad0) Reply frame received for 3\nI0506 21:07:45.823996 3488 log.go:172] (0xc00041dad0) (0xc0006a85a0) Create stream\nI0506 21:07:45.824011 3488 log.go:172] (0xc00041dad0) (0xc0006a85a0) Stream added, broadcasting: 5\nI0506 21:07:45.824759 3488 log.go:172] (0xc00041dad0) Reply frame received for 5\nI0506 21:07:45.882369 3488 log.go:172] (0xc00041dad0) Data frame received for 3\nI0506 21:07:45.882403 3488 log.go:172] (0xc00068a5a0) (3) Data frame handling\nI0506 21:07:45.882442 3488 log.go:172] (0xc00041dad0) Data frame received for 5\nI0506 21:07:45.882490 3488 log.go:172] (0xc0006a85a0) (5) Data frame handling\nI0506 21:07:45.882514 3488 log.go:172] (0xc0006a85a0) (5) Data frame sent\nI0506 21:07:45.882527 3488 log.go:172] (0xc00041dad0) Data frame received for 5\nI0506 21:07:45.882540 3488 log.go:172] (0xc0006a85a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.214.197 80\nConnection to 10.104.214.197 80 port [tcp/http] succeeded!\nI0506 21:07:45.884133 3488 log.go:172] (0xc00041dad0) Data frame received for 1\nI0506 21:07:45.884159 3488 log.go:172] (0xc0006bbcc0) (1) Data frame handling\nI0506 21:07:45.884170 3488 log.go:172] (0xc0006bbcc0) (1) Data frame sent\nI0506 21:07:45.884187 3488 log.go:172] (0xc00041dad0) (0xc0006bbcc0) Stream removed, broadcasting: 1\nI0506 21:07:45.884214 3488 log.go:172] (0xc00041dad0) Go away received\nI0506 21:07:45.884701 3488 log.go:172] (0xc00041dad0) (0xc0006bbcc0) Stream removed, broadcasting: 
1\nI0506 21:07:45.884738 3488 log.go:172] (0xc00041dad0) (0xc00068a5a0) Stream removed, broadcasting: 3\nI0506 21:07:45.884759 3488 log.go:172] (0xc00041dad0) (0xc0006a85a0) Stream removed, broadcasting: 5\n" May 6 21:07:45.889: INFO: stdout: "" May 6 21:07:45.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5911 execpod-affinity9gdgk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.214.197:80/ ; done' May 6 21:07:46.462: INFO: stderr: "I0506 21:07:46.313056 3510 log.go:172] (0xc0000ecb00) (0xc0001b8280) Create stream\nI0506 21:07:46.313272 3510 log.go:172] (0xc0000ecb00) (0xc0001b8280) Stream added, broadcasting: 1\nI0506 21:07:46.316038 3510 log.go:172] (0xc0000ecb00) Reply frame received for 1\nI0506 21:07:46.316068 3510 log.go:172] (0xc0000ecb00) (0xc00017ae60) Create stream\nI0506 21:07:46.316075 3510 log.go:172] (0xc0000ecb00) (0xc00017ae60) Stream added, broadcasting: 3\nI0506 21:07:46.317022 3510 log.go:172] (0xc0000ecb00) Reply frame received for 3\nI0506 21:07:46.317081 3510 log.go:172] (0xc0000ecb00) (0xc0001b8a00) Create stream\nI0506 21:07:46.317099 3510 log.go:172] (0xc0000ecb00) (0xc0001b8a00) Stream added, broadcasting: 5\nI0506 21:07:46.318111 3510 log.go:172] (0xc0000ecb00) Reply frame received for 5\nI0506 21:07:46.372931 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.372968 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.372977 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.372996 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.373005 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.373014 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.375908 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.375932 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.375948 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.376384 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.376405 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.376420 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ echo\n+ curl -q -sI0506 21:07:46.376518 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.376529 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.376536 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.376547 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.376560 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.376584 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.383829 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.383849 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.383857 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.384509 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.384526 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.384539 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ echo\nI0506 21:07:46.384988 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.385014 3510 log.go:172] (0xc00017ae60) (3) Data 
frame handling\nI0506 21:07:46.385040 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.385057 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.385064 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.385071 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.388931 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.388954 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.388978 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.389606 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.389634 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.389647 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.389681 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.389709 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.389727 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.393650 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.393670 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.393691 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.393962 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.393985 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.393996 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.394020 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.394034 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.394043 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.397607 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.397643 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.397666 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.398123 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.398156 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.398171 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.398185 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.398197 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.398212 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\nI0506 21:07:46.398222 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/I0506 21:07:46.398232 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.398248 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n\nI0506 21:07:46.405343 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.405405 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.405425 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.405833 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.405846 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.405852 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.405937 3510 log.go:172] (0xc0000ecb00) Data frame received 
for 3\nI0506 21:07:46.405961 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.405995 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.410206 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.410218 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.410230 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.410743 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.410755 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.410761 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.410769 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.410774 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.410787 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.416355 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.416383 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.416424 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.416747 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.416784 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.416798 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.416811 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.416820 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.416827 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\nI0506 21:07:46.416835 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.416841 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.416853 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\nI0506 21:07:46.420752 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.420770 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.420799 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.421467 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.421499 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.421508 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.421519 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.421525 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.421531 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.425031 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.425056 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.425065 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.425703 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.425729 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.425745 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.425772 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.425786 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.425808 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.433437 3510 log.go:172] (0xc0000ecb00) Data frame 
received for 5\nI0506 21:07:46.433471 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.433482 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.433497 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.433510 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.433520 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.433529 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.433551 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.433565 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.436043 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.436054 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.436063 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.436510 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.436525 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.436532 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.436540 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.436545 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.436551 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.440576 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.440593 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.440606 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.440945 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.440958 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.440967 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.440982 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.440999 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.441013 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.444593 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.444609 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.444621 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.445373 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.445394 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.445409 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.445424 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.445438 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.445449 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.450027 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.450041 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.450049 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.451090 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.451115 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.451124 3510 log.go:172] (0xc0001b8a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.104.214.197:80/\nI0506 21:07:46.451132 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.451152 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.451163 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.454534 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.454556 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.454574 3510 log.go:172] (0xc00017ae60) (3) Data frame sent\nI0506 21:07:46.455188 3510 log.go:172] (0xc0000ecb00) Data frame received for 5\nI0506 21:07:46.455212 3510 log.go:172] (0xc0001b8a00) (5) Data frame handling\nI0506 21:07:46.455235 3510 log.go:172] (0xc0000ecb00) Data frame received for 3\nI0506 21:07:46.455261 3510 log.go:172] (0xc00017ae60) (3) Data frame handling\nI0506 21:07:46.456525 3510 log.go:172] (0xc0000ecb00) Data frame received for 1\nI0506 21:07:46.456549 3510 log.go:172] (0xc0001b8280) (1) Data frame handling\nI0506 21:07:46.456566 3510 log.go:172] (0xc0001b8280) (1) Data frame sent\nI0506 21:07:46.456591 3510 log.go:172] (0xc0000ecb00) (0xc0001b8280) Stream removed, broadcasting: 1\nI0506 21:07:46.456609 3510 log.go:172] (0xc0000ecb00) Go away received\nI0506 21:07:46.456870 3510 log.go:172] (0xc0000ecb00) (0xc0001b8280) Stream removed, broadcasting: 1\nI0506 21:07:46.456891 3510 log.go:172] (0xc0000ecb00) (0xc00017ae60) Stream removed, broadcasting: 3\nI0506 21:07:46.456902 3510 log.go:172] (0xc0000ecb00) (0xc0001b8a00) Stream removed, broadcasting: 5\n" May 6 21:07:46.462: INFO: stdout: "\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc\naffinity-clusterip-timeout-7mwjc" May 6 21:07:46.462: INFO: Received response from host: May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Received response from host: 
affinity-clusterip-timeout-7mwjc May 6 21:07:46.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5911 execpod-affinity9gdgk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.104.214.197:80/' May 6 21:07:46.673: INFO: stderr: "I0506 21:07:46.598648 3531 log.go:172] (0xc000bf28f0) (0xc000515180) Create stream\nI0506 21:07:46.598715 3531 log.go:172] (0xc000bf28f0) (0xc000515180) Stream added, broadcasting: 1\nI0506 21:07:46.601675 3531 log.go:172] (0xc000bf28f0) Reply frame received for 1\nI0506 21:07:46.601765 3531 log.go:172] (0xc000bf28f0) (0xc000520500) Create stream\nI0506 21:07:46.601788 3531 log.go:172] (0xc000bf28f0) (0xc000520500) Stream added, broadcasting: 3\nI0506 21:07:46.602971 3531 log.go:172] (0xc000bf28f0) Reply frame received for 3\nI0506 21:07:46.603016 3531 log.go:172] (0xc000bf28f0) (0xc00053a1e0) Create stream\nI0506 21:07:46.603056 3531 log.go:172] (0xc000bf28f0) (0xc00053a1e0) Stream added, broadcasting: 5\nI0506 21:07:46.605859 3531 log.go:172] (0xc000bf28f0) Reply frame received for 5\nI0506 21:07:46.661990 3531 log.go:172] (0xc000bf28f0) Data frame received for 5\nI0506 21:07:46.662022 3531 log.go:172] (0xc00053a1e0) (5) Data frame handling\nI0506 21:07:46.662044 3531 log.go:172] (0xc00053a1e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:07:46.665942 3531 log.go:172] (0xc000bf28f0) Data frame received for 3\nI0506 21:07:46.665980 3531 log.go:172] (0xc000520500) (3) Data frame handling\nI0506 21:07:46.665996 3531 log.go:172] (0xc000520500) (3) Data frame sent\nI0506 21:07:46.667274 3531 log.go:172] (0xc000bf28f0) Data frame received for 5\nI0506 21:07:46.667292 3531 log.go:172] (0xc00053a1e0) (5) Data frame handling\nI0506 21:07:46.667371 3531 log.go:172] (0xc000bf28f0) Data frame received for 3\nI0506 21:07:46.667389 3531 log.go:172] (0xc000520500) (3) Data frame handling\nI0506 21:07:46.668853 3531 log.go:172] (0xc000bf28f0) Data frame received for 1\nI0506 21:07:46.668875 3531 log.go:172] (0xc000515180) (1) Data frame handling\nI0506 21:07:46.668888 3531 log.go:172] (0xc000515180) (1) Data frame sent\nI0506 21:07:46.668903 3531 log.go:172] (0xc000bf28f0) (0xc000515180) Stream removed, broadcasting: 1\nI0506 21:07:46.669361 3531 log.go:172] (0xc000bf28f0) (0xc000515180) Stream removed, broadcasting: 1\nI0506 21:07:46.669440 3531 log.go:172] (0xc000bf28f0) (0xc000520500) Stream removed, broadcasting: 3\nI0506 21:07:46.669454 3531 log.go:172] (0xc000bf28f0) (0xc00053a1e0) Stream removed, broadcasting: 5\n" May 6 21:07:46.673: INFO: stdout: "affinity-clusterip-timeout-7mwjc" May 6 21:08:01.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5911 execpod-affinity9gdgk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.104.214.197:80/' May 6 21:08:01.915: INFO: stderr: "I0506 21:08:01.849850 3554 log.go:172] (0xc000724840) (0xc0005741e0) Create stream\nI0506 21:08:01.849893 3554 log.go:172] (0xc000724840) (0xc0005741e0) Stream added, broadcasting: 1\nI0506 21:08:01.851406 3554 log.go:172] (0xc000724840) Reply frame received for 1\nI0506 21:08:01.851434 3554 log.go:172] (0xc000724840) (0xc00050ed20) Create stream\nI0506 21:08:01.851441 3554 log.go:172] (0xc000724840) (0xc00050ed20) Stream added, broadcasting: 3\nI0506 21:08:01.852060 3554 log.go:172] (0xc000724840) Reply frame received for 3\nI0506 21:08:01.852091 3554 log.go:172] 
(0xc000724840) (0xc000575180) Create stream\nI0506 21:08:01.852102 3554 log.go:172] (0xc000724840) (0xc000575180) Stream added, broadcasting: 5\nI0506 21:08:01.852713 3554 log.go:172] (0xc000724840) Reply frame received for 5\nI0506 21:08:01.900214 3554 log.go:172] (0xc000724840) Data frame received for 5\nI0506 21:08:01.900240 3554 log.go:172] (0xc000575180) (5) Data frame handling\nI0506 21:08:01.900255 3554 log.go:172] (0xc000575180) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.214.197:80/\nI0506 21:08:01.905511 3554 log.go:172] (0xc000724840) Data frame received for 3\nI0506 21:08:01.905536 3554 log.go:172] (0xc00050ed20) (3) Data frame handling\nI0506 21:08:01.905558 3554 log.go:172] (0xc00050ed20) (3) Data frame sent\nI0506 21:08:01.906552 3554 log.go:172] (0xc000724840) Data frame received for 3\nI0506 21:08:01.906567 3554 log.go:172] (0xc00050ed20) (3) Data frame handling\nI0506 21:08:01.906744 3554 log.go:172] (0xc000724840) Data frame received for 5\nI0506 21:08:01.906769 3554 log.go:172] (0xc000575180) (5) Data frame handling\nI0506 21:08:01.908777 3554 log.go:172] (0xc000724840) Data frame received for 1\nI0506 21:08:01.908794 3554 log.go:172] (0xc0005741e0) (1) Data frame handling\nI0506 21:08:01.908804 3554 log.go:172] (0xc0005741e0) (1) Data frame sent\nI0506 21:08:01.908820 3554 log.go:172] (0xc000724840) (0xc0005741e0) Stream removed, broadcasting: 1\nI0506 21:08:01.908916 3554 log.go:172] (0xc000724840) Go away received\nI0506 21:08:01.909297 3554 log.go:172] (0xc000724840) (0xc0005741e0) Stream removed, broadcasting: 1\nI0506 21:08:01.909318 3554 log.go:172] (0xc000724840) (0xc00050ed20) Stream removed, broadcasting: 3\nI0506 21:08:01.909333 3554 log.go:172] (0xc000724840) (0xc000575180) Stream removed, broadcasting: 5\n" May 6 21:08:01.915: INFO: stdout: "affinity-clusterip-timeout-cn8tf" May 6 21:08:01.915: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-5911, will wait for the garbage collector to delete the pods May 6 21:08:02.189: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 89.902039ms May 6 21:08:02.689: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.219883ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:08:16.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5911" for this suite. 
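The affinity test that just finished is worth unpacking: it detects the kube-proxy mode (iptables), creates a ClusterIP service with ClientIP session affinity plus a timeout, curls it 16 times from one exec pod and gets the same backend every time (affinity-clusterip-timeout-7mwjc), then waits past the affinity timeout (the 15-second gap between 21:07:46 and 21:08:01) and sees the next request land on a different backend (affinity-clusterip-timeout-cn8tf). The mechanism is the service's sessionAffinityConfig; a minimal sketch, with hypothetical timeout and target port (the test derives its own values):

    # ClusterIP service whose per-client affinity expires after timeoutSeconds idle.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-clusterip-timeout
    spec:
      selector:
        name: affinity-clusterip-timeout
      ports:
      - port: 80
        targetPort: 9376        # assumed backend port
      sessionAffinity: ClientIP
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 10    # affinity forgotten after 10s without traffic
    EOF
    # Repeated curls from one client hit the same pod until the timeout lapses.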
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:60.265 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":238,"skipped":3792,"failed":0} SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:08:16.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 6 21:08:16.570: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 21:08:16.581: INFO: Waiting for terminating namespaces to be deleted... May 6 21:08:16.583: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 6 21:08:16.587: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 6 21:08:16.587: INFO: Container kindnet-cni ready: true, restart count 0 May 6 21:08:16.587: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 6 21:08:16.587: INFO: Container kube-proxy ready: true, restart count 0 May 6 21:08:16.587: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 6 21:08:16.592: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 6 21:08:16.592: INFO: Container kindnet-cni ready: true, restart count 0 May 6 21:08:16.592: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 6 21:08:16.592: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 May 6 21:08:16.715: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker May 6 21:08:16.715: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 May 6 21:08:16.715: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker May 6 21:08:16.715: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. 
May 6 21:08:16.715: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker May 6 21:08:16.722: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-aa026fab-eca2-4c4f-8b8c-0f7f2abede4b.160c8c8ae80c5aea], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7786/filler-pod-aa026fab-eca2-4c4f-8b8c-0f7f2abede4b to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-aa026fab-eca2-4c4f-8b8c-0f7f2abede4b.160c8c8b63d23b5d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-aa026fab-eca2-4c4f-8b8c-0f7f2abede4b.160c8c8ba5a32a70], Reason = [Created], Message = [Created container filler-pod-aa026fab-eca2-4c4f-8b8c-0f7f2abede4b] STEP: Considering event: Type = [Normal], Name = [filler-pod-aa026fab-eca2-4c4f-8b8c-0f7f2abede4b.160c8c8bb6dba831], Reason = [Started], Message = [Started container filler-pod-aa026fab-eca2-4c4f-8b8c-0f7f2abede4b] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6d24edc-25ca-463e-855c-2df07f0eb7ff.160c8c8ae60e1d18], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7786/filler-pod-e6d24edc-25ca-463e-855c-2df07f0eb7ff to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6d24edc-25ca-463e-855c-2df07f0eb7ff.160c8c8b504c51a6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6d24edc-25ca-463e-855c-2df07f0eb7ff.160c8c8b994e99ad], Reason = [Created], Message = [Created container filler-pod-e6d24edc-25ca-463e-855c-2df07f0eb7ff] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6d24edc-25ca-463e-855c-2df07f0eb7ff.160c8c8bb06a4ae6], Reason = [Started], Message = [Started container filler-pod-e6d24edc-25ca-463e-855c-2df07f0eb7ff] STEP: Considering event: Type = [Warning], Name = [additional-pod.160c8c8c5991c307], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.160c8c8c655f2109], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:08:24.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7786" for this suite. 
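------------------------------
The predicate validated above is arithmetic on CPU requests: the test sums what each node's pods already request, creates "filler" pods sized to consume almost all remaining allocatable CPU (cpu=11130m each), then submits one more pod whose request fits nowhere and asserts the FailedScheduling event. A minimal sketch of the same check, with hypothetical names:

# Allocatable CPU on a node, versus what its pods have already requested.
kubectl get node latest-worker -o jsonpath='{.status.allocatable.cpu}'

# A pod whose CPU request exceeds anything the cluster can offer.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: overcommit-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "1000"
EOF

# The pod stays Pending; the scheduler records why, mirroring the
# "0/3 nodes are available ... 2 Insufficient cpu" events above.
kubectl get pod overcommit-pod -o jsonpath='{.status.phase}'
kubectl get events --field-selector involvedObject.name=overcommit-pod,reason=FailedScheduling
------------------------------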
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.726 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":239,"skipped":3802,"failed":0} [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:08:24.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-ad8d9d86-afa9-486f-9420-7994dba59026 STEP: Creating a pod to test consume secrets May 6 21:08:24.381: INFO: Waiting up to 5m0s for pod "pod-secrets-e77de45e-cc45-45e4-beb1-b8ff5a8837f0" in namespace "secrets-6773" to be "Succeeded or Failed" May 6 21:08:24.454: INFO: Pod "pod-secrets-e77de45e-cc45-45e4-beb1-b8ff5a8837f0": Phase="Pending", Reason="", readiness=false. Elapsed: 72.645564ms May 6 21:08:26.477: INFO: Pod "pod-secrets-e77de45e-cc45-45e4-beb1-b8ff5a8837f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09611612s May 6 21:08:28.481: INFO: Pod "pod-secrets-e77de45e-cc45-45e4-beb1-b8ff5a8837f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09983781s May 6 21:08:30.484: INFO: Pod "pod-secrets-e77de45e-cc45-45e4-beb1-b8ff5a8837f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.103372267s STEP: Saw pod success May 6 21:08:30.484: INFO: Pod "pod-secrets-e77de45e-cc45-45e4-beb1-b8ff5a8837f0" satisfied condition "Succeeded or Failed" May 6 21:08:30.487: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e77de45e-cc45-45e4-beb1-b8ff5a8837f0 container secret-volume-test: STEP: delete the pod May 6 21:08:30.519: INFO: Waiting for pod pod-secrets-e77de45e-cc45-45e4-beb1-b8ff5a8837f0 to disappear May 6 21:08:30.535: INFO: Pod pod-secrets-e77de45e-cc45-45e4-beb1-b8ff5a8837f0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:08:30.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6773" for this suite. STEP: Destroying namespace "secret-namespace-5381" for this suite. 
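------------------------------
Secrets are namespace-scoped, which is why the test above can create a second secret with the identical name in a throwaway namespace (secret-namespace-5381) and still assert that the pod mounts only the secret from its own namespace. A minimal sketch with hypothetical names:

kubectl create namespace ns-a
kubectl create namespace ns-b
kubectl create secret generic shared-name --from-literal=data=from-ns-a -n ns-a
kubectl create secret generic shared-name --from-literal=data=from-ns-b -n ns-b

kubectl apply -n ns-a -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-reader
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/secret/data"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret
  volumes:
  - name: secret-vol
    secret:
      secretName: shared-name
EOF

# Once the pod has run: prints "from-ns-a", never "from-ns-b".
kubectl logs -n ns-a secret-reader
------------------------------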
• [SLOW TEST:6.521 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":240,"skipped":3802,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:08:30.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:08:46.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3683" for this suite. STEP: Destroying namespace "nsdeletetest-8714" for this suite. May 6 21:08:46.732: INFO: Namespace nsdeletetest-8714 was already deleted STEP: Destroying namespace "nsdeletetest-7115" for this suite. 
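------------------------------
Deleting a namespace is asynchronous: the namespace enters Terminating, its pods are garbage-collected, and only then does the namespace object disappear. That ordering is what the test above relies on when it waits for removal, recreates the namespace, and verifies it is empty. A minimal sketch (hypothetical names):

kubectl create namespace nsdelete-demo
kubectl run sleeper --image=busybox -n nsdelete-demo -- sleep 3600
kubectl wait --for=condition=Ready pod/sleeper -n nsdelete-demo --timeout=60s

# Blocks until finalizers run and every pod in the namespace is gone.
kubectl delete namespace nsdelete-demo

# Recreate and confirm no pods survived the deletion.
kubectl create namespace nsdelete-demo
kubectl get pods -n nsdelete-demo
------------------------------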
• [SLOW TEST:16.117 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":241,"skipped":3807,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:08:46.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 6 21:08:46.982: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9539 /api/v1/namespaces/watch-9539/configmaps/e2e-watch-test-resource-version 831435e4-1920-4167-93da-0bfd161643ca 2103266 0 2020-05-06 21:08:46 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-06 21:08:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 21:08:46.982: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9539 /api/v1/namespaces/watch-9539/configmaps/e2e-watch-test-resource-version 831435e4-1920-4167-93da-0bfd161643ca 2103267 0 2020-05-06 21:08:46 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-06 21:08:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:08:46.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9539" for this suite. 
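------------------------------
The watch above is opened from the resourceVersion returned by the first update, so only the later MODIFIED (mutation: 2) and DELETED notifications are delivered: the apiserver replays events newer than the supplied version from its watch cache. A minimal sketch against the raw API (hypothetical configmap name; replay only works while the events are still inside the apiserver's watch-cache window):

kubectl create configmap rv-demo --from-literal=mutation=1
RV=$(kubectl get configmap rv-demo -o jsonpath='{.metadata.resourceVersion}')
kubectl patch configmap rv-demo -p '{"data":{"mutation":"2"}}'
kubectl delete configmap rv-demo

# Streams only the MODIFIED and DELETED watch events that happened after $RV.
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&fieldSelector=metadata.name=rv-demo&resourceVersion=$RV&timeoutSeconds=10"
------------------------------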
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":242,"skipped":3834,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:08:47.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4903 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4903;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4903 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4903;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4903.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4903.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4903.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4903.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4903.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4903.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4903.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4903.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4903.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4903.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4903.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4903.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4903.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 16.243.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.243.16_udp@PTR;check="$$(dig +tcp +noall +answer +search 16.243.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.243.16_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4903 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4903;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4903 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4903;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4903.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4903.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4903.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4903.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4903.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4903.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4903.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4903.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4903.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4903.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4903.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4903.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4903.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 16.243.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.243.16_udp@PTR;check="$$(dig +tcp +noall +answer +search 16.243.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.243.16_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 21:08:55.303: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.306: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.310: INFO: Unable to read wheezy_udp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.313: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.316: INFO: Unable to read wheezy_udp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.319: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.323: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.326: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.348: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.351: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.354: INFO: Unable to read jessie_udp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.356: INFO: Unable to read jessie_tcp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.358: INFO: Unable to read jessie_udp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.360: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.363: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.365: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:08:55.381: INFO: Lookups using dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4903 wheezy_tcp@dns-test-service.dns-4903 wheezy_udp@dns-test-service.dns-4903.svc wheezy_tcp@dns-test-service.dns-4903.svc wheezy_udp@_http._tcp.dns-test-service.dns-4903.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4903.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4903 jessie_tcp@dns-test-service.dns-4903 jessie_udp@dns-test-service.dns-4903.svc jessie_tcp@dns-test-service.dns-4903.svc jessie_udp@_http._tcp.dns-test-service.dns-4903.svc jessie_tcp@_http._tcp.dns-test-service.dns-4903.svc] May 6 21:09:00.387: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.391: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.395: INFO: Unable to read wheezy_udp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.397: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.399: INFO: Unable to read wheezy_udp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.402: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.403: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.406: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.427: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.430: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.432: INFO: Unable to read jessie_udp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.435: INFO: Unable to read jessie_tcp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.438: INFO: Unable to read jessie_udp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.440: INFO: Unable to read jessie_tcp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.443: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.446: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:00.464: INFO: Lookups using dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4903 wheezy_tcp@dns-test-service.dns-4903 wheezy_udp@dns-test-service.dns-4903.svc wheezy_tcp@dns-test-service.dns-4903.svc wheezy_udp@_http._tcp.dns-test-service.dns-4903.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4903.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4903 jessie_tcp@dns-test-service.dns-4903 jessie_udp@dns-test-service.dns-4903.svc jessie_tcp@dns-test-service.dns-4903.svc jessie_udp@_http._tcp.dns-test-service.dns-4903.svc jessie_tcp@_http._tcp.dns-test-service.dns-4903.svc] May 6 21:09:05.385: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.388: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.391: INFO: Unable to read wheezy_udp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.395: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4903 from pod 
dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.398: INFO: Unable to read wheezy_udp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.402: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.405: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.408: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.432: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.435: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.438: INFO: Unable to read jessie_udp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.442: INFO: Unable to read jessie_tcp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.445: INFO: Unable to read jessie_udp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.449: INFO: Unable to read jessie_tcp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.452: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.456: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:05.477: INFO: Lookups using dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4903 wheezy_tcp@dns-test-service.dns-4903 wheezy_udp@dns-test-service.dns-4903.svc wheezy_tcp@dns-test-service.dns-4903.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-4903.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4903.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4903 jessie_tcp@dns-test-service.dns-4903 jessie_udp@dns-test-service.dns-4903.svc jessie_tcp@dns-test-service.dns-4903.svc jessie_udp@_http._tcp.dns-test-service.dns-4903.svc jessie_tcp@_http._tcp.dns-test-service.dns-4903.svc] May 6 21:09:10.385: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.389: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.392: INFO: Unable to read wheezy_udp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.396: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.399: INFO: Unable to read wheezy_udp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.402: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.404: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.407: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.428: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.430: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.433: INFO: Unable to read jessie_udp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.435: INFO: Unable to read jessie_tcp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.438: INFO: Unable to read jessie_udp@dns-test-service.dns-4903.svc from pod 
dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.440: INFO: Unable to read jessie_tcp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.443: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.446: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:10.461: INFO: Lookups using dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4903 wheezy_tcp@dns-test-service.dns-4903 wheezy_udp@dns-test-service.dns-4903.svc wheezy_tcp@dns-test-service.dns-4903.svc wheezy_udp@_http._tcp.dns-test-service.dns-4903.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4903.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4903 jessie_tcp@dns-test-service.dns-4903 jessie_udp@dns-test-service.dns-4903.svc jessie_tcp@dns-test-service.dns-4903.svc jessie_udp@_http._tcp.dns-test-service.dns-4903.svc jessie_tcp@_http._tcp.dns-test-service.dns-4903.svc] May 6 21:09:15.386: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.390: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.394: INFO: Unable to read wheezy_udp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.398: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.401: INFO: Unable to read wheezy_udp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.404: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.408: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.411: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4903.svc from pod 
dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.434: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.438: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.441: INFO: Unable to read jessie_udp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.444: INFO: Unable to read jessie_tcp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.446: INFO: Unable to read jessie_udp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.449: INFO: Unable to read jessie_tcp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.452: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.455: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:15.472: INFO: Lookups using dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4903 wheezy_tcp@dns-test-service.dns-4903 wheezy_udp@dns-test-service.dns-4903.svc wheezy_tcp@dns-test-service.dns-4903.svc wheezy_udp@_http._tcp.dns-test-service.dns-4903.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4903.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4903 jessie_tcp@dns-test-service.dns-4903 jessie_udp@dns-test-service.dns-4903.svc jessie_tcp@dns-test-service.dns-4903.svc jessie_udp@_http._tcp.dns-test-service.dns-4903.svc jessie_tcp@_http._tcp.dns-test-service.dns-4903.svc] May 6 21:09:20.386: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.389: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.392: INFO: Unable to read wheezy_udp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could 
not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.395: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.398: INFO: Unable to read wheezy_udp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.402: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.405: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.408: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.428: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.431: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.434: INFO: Unable to read jessie_udp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.437: INFO: Unable to read jessie_tcp@dns-test-service.dns-4903 from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.440: INFO: Unable to read jessie_udp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.443: INFO: Unable to read jessie_tcp@dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.445: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.448: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4903.svc from pod dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087: the server could not find the requested resource (get pods dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087) May 6 21:09:20.474: INFO: Lookups using dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-4903 wheezy_tcp@dns-test-service.dns-4903 wheezy_udp@dns-test-service.dns-4903.svc wheezy_tcp@dns-test-service.dns-4903.svc wheezy_udp@_http._tcp.dns-test-service.dns-4903.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4903.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4903 jessie_tcp@dns-test-service.dns-4903 jessie_udp@dns-test-service.dns-4903.svc jessie_tcp@dns-test-service.dns-4903.svc jessie_udp@_http._tcp.dns-test-service.dns-4903.svc jessie_tcp@_http._tcp.dns-test-service.dns-4903.svc] May 6 21:09:25.599: INFO: DNS probes using dns-4903/dns-test-07cf6b87-4632-41d6-99ba-6451bc6ca087 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:09:26.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4903" for this suite. • [SLOW TEST:39.776 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":243,"skipped":3852,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:09:26.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:09:27.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3091" for this suite. 
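------------------------------
The DNS probe loops above lean on dig's +search flag: the probe pod's /etc/resolv.conf lists the search domains <namespace>.svc.cluster.local, svc.cluster.local, and cluster.local, which is what lets partially qualified names such as dns-test-service and dns-test-service.dns-4903 resolve at all. The repeated "Unable to read" lines are just the prober polling until the headless service's endpoints exist; at 21:09:25 every name resolves and the test passes. A condensed sketch, assuming a hypothetical service "my-svc" and client pod "dns-client" in namespace "dns-demo":

kubectl exec dns-client -n dns-demo -- cat /etc/resolv.conf
kubectl exec dns-client -n dns-demo -- dig +search +short my-svc A
kubectl exec dns-client -n dns-demo -- dig +search +short my-svc.dns-demo A
kubectl exec dns-client -n dns-demo -- dig +search +short my-svc.dns-demo.svc A

The Events test just above is plain CRUD on the core v1 Event resource. A hand-rolled equivalent (hypothetical names; a manually created Event must still name an involvedObject in the same namespace):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Event
metadata:
  name: demo-event
  namespace: default
involvedObject:
  kind: Pod
  name: demo-pod
  namespace: default
reason: Demo
message: created by hand
type: Normal
EOF

kubectl get events --all-namespaces | grep demo-event
kubectl patch event demo-event -p '{"message":"patched by hand"}'
kubectl delete event demo-event
------------------------------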
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":244,"skipped":3878,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:09:27.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-f2e95fc7-22ec-4e18-a480-fa9d67d7fb5b STEP: Creating a pod to test consume secrets May 6 21:09:27.540: INFO: Waiting up to 5m0s for pod "pod-secrets-aa6cd012-2b72-4b9b-946d-b401ba30c709" in namespace "secrets-9322" to be "Succeeded or Failed" May 6 21:09:27.550: INFO: Pod "pod-secrets-aa6cd012-2b72-4b9b-946d-b401ba30c709": Phase="Pending", Reason="", readiness=false. Elapsed: 9.862565ms May 6 21:09:29.723: INFO: Pod "pod-secrets-aa6cd012-2b72-4b9b-946d-b401ba30c709": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182801561s May 6 21:09:31.741: INFO: Pod "pod-secrets-aa6cd012-2b72-4b9b-946d-b401ba30c709": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201439043s May 6 21:09:33.802: INFO: Pod "pod-secrets-aa6cd012-2b72-4b9b-946d-b401ba30c709": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262299537s May 6 21:09:35.807: INFO: Pod "pod-secrets-aa6cd012-2b72-4b9b-946d-b401ba30c709": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.267492517s STEP: Saw pod success May 6 21:09:35.807: INFO: Pod "pod-secrets-aa6cd012-2b72-4b9b-946d-b401ba30c709" satisfied condition "Succeeded or Failed" May 6 21:09:35.810: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-aa6cd012-2b72-4b9b-946d-b401ba30c709 container secret-volume-test: STEP: delete the pod May 6 21:09:35.844: INFO: Waiting for pod pod-secrets-aa6cd012-2b72-4b9b-946d-b401ba30c709 to disappear May 6 21:09:35.847: INFO: Pod pod-secrets-aa6cd012-2b72-4b9b-946d-b401ba30c709 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:09:35.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9322" for this suite. 
• [SLOW TEST:8.512 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":245,"skipped":3882,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:09:35.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 6 21:09:36.614: INFO: Pod name wrapped-volume-race-edc5285a-87dd-47b9-aa87-c6808a07238e: Found 0 pods out of 5 May 6 21:09:41.621: INFO: Pod name wrapped-volume-race-edc5285a-87dd-47b9-aa87-c6808a07238e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-edc5285a-87dd-47b9-aa87-c6808a07238e in namespace emptydir-wrapper-332, will wait for the garbage collector to delete the pods May 6 21:09:58.713: INFO: Deleting ReplicationController wrapped-volume-race-edc5285a-87dd-47b9-aa87-c6808a07238e took: 246.988976ms May 6 21:09:59.513: INFO: Terminating ReplicationController wrapped-volume-race-edc5285a-87dd-47b9-aa87-c6808a07238e pods took: 800.233331ms STEP: Creating RC which spawns configmap-volume pods May 6 21:10:15.670: INFO: Pod name wrapped-volume-race-636815f8-570d-4a70-b29e-4b1d7e0420e1: Found 0 pods out of 5 May 6 21:10:20.846: INFO: Pod name wrapped-volume-race-636815f8-570d-4a70-b29e-4b1d7e0420e1: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-636815f8-570d-4a70-b29e-4b1d7e0420e1 in namespace emptydir-wrapper-332, will wait for the garbage collector to delete the pods May 6 21:10:34.926: INFO: Deleting ReplicationController wrapped-volume-race-636815f8-570d-4a70-b29e-4b1d7e0420e1 took: 5.188474ms May 6 21:10:35.326: INFO: Terminating ReplicationController wrapped-volume-race-636815f8-570d-4a70-b29e-4b1d7e0420e1 pods took: 400.235517ms STEP: Creating RC which spawns configmap-volume pods May 6 21:10:47.808: INFO: Pod name wrapped-volume-race-7d168f3e-b0bf-4aca-bb4d-9be7e04f4b66: Found 0 pods out of 5 May 6 21:10:52.894: INFO: Pod name wrapped-volume-race-7d168f3e-b0bf-4aca-bb4d-9be7e04f4b66: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7d168f3e-b0bf-4aca-bb4d-9be7e04f4b66 in namespace emptydir-wrapper-332, will wait for the garbage collector to delete the pods May 6 21:11:15.970: INFO: Deleting 
ReplicationController wrapped-volume-race-7d168f3e-b0bf-4aca-bb4d-9be7e04f4b66 took: 183.759841ms May 6 21:11:16.671: INFO: Terminating ReplicationController wrapped-volume-race-7d168f3e-b0bf-4aca-bb4d-9be7e04f4b66 pods took: 700.260826ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:11:44.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-332" for this suite. • [SLOW TEST:129.340 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":246,"skipped":3885,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:11:45.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:11:45.931: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:11:58.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2424" for this suite. 
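The pods-2424 spec above retrieves container logs by upgrading the pod log endpoint to a websocket. Outside the e2e framework the same endpoint is normally consumed as a plain HTTP stream via client-go; a minimal sketch of that equivalent follows (the pod name is a placeholder, not the generated name from the log, and the websocket upgrade itself is not shown).

```go
package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// GetLogs targets GET .../namespaces/{ns}/pods/{name}/log, the same
	// endpoint the e2e test reaches over a websocket; client-go streams it.
	req := client.CoreV1().Pods("pods-2424").
		GetLogs("pod-logs-websocket-demo", &corev1.PodLogOptions{Follow: true})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	// Copy log lines to stdout as they arrive.
	_, _ = io.Copy(os.Stdout, stream)
}
```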
• [SLOW TEST:13.259 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":247,"skipped":3896,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:11:58.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 6 21:11:58.877: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 21:11:59.048: INFO: Waiting for terminating namespaces to be deleted... May 6 21:11:59.053: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 6 21:11:59.062: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 6 21:11:59.062: INFO: Container kindnet-cni ready: true, restart count 0 May 6 21:11:59.062: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 6 21:11:59.062: INFO: Container kube-proxy ready: true, restart count 0 May 6 21:11:59.062: INFO: pod-logs-websocket-b34f38f3-7b18-4792-aa06-a2fa1576c223 from pods-2424 started at 2020-05-06 21:11:46 +0000 UTC (1 container statuses recorded) May 6 21:11:59.062: INFO: Container main ready: true, restart count 0 May 6 21:11:59.062: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 6 21:11:59.066: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 6 21:11:59.066: INFO: Container kindnet-cni ready: true, restart count 0 May 6 21:11:59.066: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 6 21:11:59.066: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-7882cb19-9238-48d3-8f68-08f4297f47ac 42 STEP: Trying to relaunch the pod, now with labels. 
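At this point the scheduler spec has applied a random label to the chosen node and relaunched the pod with a matching nodeSelector. A rough client-go equivalent of those two steps follows; the label key/value and the pause image are placeholders standing in for the generated values in the log.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Step 1: label the node the unconstrained probe pod landed on.
	patch := []byte(`{"metadata":{"labels":{"example.com/e2e-demo":"42"}}}`)
	if _, err := client.CoreV1().Nodes().Patch(context.TODO(), "latest-worker2",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Step 2: relaunch the pod constrained to that label via nodeSelector,
	// so the scheduler must place it on the labeled node.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"example.com/e2e-demo": "42"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2", // placeholder image
			}},
		},
	}
	created, err := client.CoreV1().Pods("sched-pred-8103").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)
}
```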
STEP: removing the label kubernetes.io/e2e-7882cb19-9238-48d3-8f68-08f4297f47ac off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-7882cb19-9238-48d3-8f68-08f4297f47ac [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:12:17.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8103" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:19.625 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":248,"skipped":3902,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:12:18.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9966 STEP: creating service affinity-clusterip in namespace services-9966 STEP: creating replication controller affinity-clusterip in namespace services-9966 I0506 21:12:19.259252 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-9966, replica count: 3 I0506 21:12:22.309597 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 21:12:25.309808 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 21:12:28.310033 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 21:12:28.377: INFO: Creating new exec pod May 6 21:12:35.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9966 execpod-affinityg4s2m -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 6 21:12:39.400: INFO: stderr: "I0506 21:12:39.330497 3574 log.go:172] (0xc00003b080) (0xc0006e77c0) Create stream\nI0506 21:12:39.330533 3574 log.go:172] (0xc00003b080) (0xc0006e77c0) Stream added, broadcasting: 1\nI0506 
21:12:39.332778 3574 log.go:172] (0xc00003b080) Reply frame received for 1\nI0506 21:12:39.332820 3574 log.go:172] (0xc00003b080) (0xc0006ba0a0) Create stream\nI0506 21:12:39.332835 3574 log.go:172] (0xc00003b080) (0xc0006ba0a0) Stream added, broadcasting: 3\nI0506 21:12:39.333908 3574 log.go:172] (0xc00003b080) Reply frame received for 3\nI0506 21:12:39.333960 3574 log.go:172] (0xc00003b080) (0xc0006b4780) Create stream\nI0506 21:12:39.333974 3574 log.go:172] (0xc00003b080) (0xc0006b4780) Stream added, broadcasting: 5\nI0506 21:12:39.334815 3574 log.go:172] (0xc00003b080) Reply frame received for 5\nI0506 21:12:39.393037 3574 log.go:172] (0xc00003b080) Data frame received for 5\nI0506 21:12:39.393062 3574 log.go:172] (0xc0006b4780) (5) Data frame handling\nI0506 21:12:39.393083 3574 log.go:172] (0xc0006b4780) (5) Data frame sent\nI0506 21:12:39.393095 3574 log.go:172] (0xc00003b080) Data frame received for 5\nI0506 21:12:39.393107 3574 log.go:172] (0xc0006b4780) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0506 21:12:39.393385 3574 log.go:172] (0xc0006b4780) (5) Data frame sent\nI0506 21:12:39.393461 3574 log.go:172] (0xc00003b080) Data frame received for 5\nI0506 21:12:39.393477 3574 log.go:172] (0xc0006b4780) (5) Data frame handling\nI0506 21:12:39.393609 3574 log.go:172] (0xc00003b080) Data frame received for 3\nI0506 21:12:39.393627 3574 log.go:172] (0xc0006ba0a0) (3) Data frame handling\nI0506 21:12:39.395101 3574 log.go:172] (0xc00003b080) Data frame received for 1\nI0506 21:12:39.395116 3574 log.go:172] (0xc0006e77c0) (1) Data frame handling\nI0506 21:12:39.395128 3574 log.go:172] (0xc0006e77c0) (1) Data frame sent\nI0506 21:12:39.395138 3574 log.go:172] (0xc00003b080) (0xc0006e77c0) Stream removed, broadcasting: 1\nI0506 21:12:39.395153 3574 log.go:172] (0xc00003b080) Go away received\nI0506 21:12:39.395505 3574 log.go:172] (0xc00003b080) (0xc0006e77c0) Stream removed, broadcasting: 1\nI0506 21:12:39.395530 3574 log.go:172] (0xc00003b080) (0xc0006ba0a0) Stream removed, broadcasting: 3\nI0506 21:12:39.395543 3574 log.go:172] (0xc00003b080) (0xc0006b4780) Stream removed, broadcasting: 5\n" May 6 21:12:39.400: INFO: stdout: "" May 6 21:12:39.400: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9966 execpod-affinityg4s2m -- /bin/sh -x -c nc -zv -t -w 2 10.102.188.53 80' May 6 21:12:39.576: INFO: stderr: "I0506 21:12:39.515885 3606 log.go:172] (0xc000ab51e0) (0xc00044f540) Create stream\nI0506 21:12:39.515927 3606 log.go:172] (0xc000ab51e0) (0xc00044f540) Stream added, broadcasting: 1\nI0506 21:12:39.517446 3606 log.go:172] (0xc000ab51e0) Reply frame received for 1\nI0506 21:12:39.517468 3606 log.go:172] (0xc000ab51e0) (0xc00044fb80) Create stream\nI0506 21:12:39.517475 3606 log.go:172] (0xc000ab51e0) (0xc00044fb80) Stream added, broadcasting: 3\nI0506 21:12:39.518017 3606 log.go:172] (0xc000ab51e0) Reply frame received for 3\nI0506 21:12:39.518037 3606 log.go:172] (0xc000ab51e0) (0xc0002ba140) Create stream\nI0506 21:12:39.518046 3606 log.go:172] (0xc000ab51e0) (0xc0002ba140) Stream added, broadcasting: 5\nI0506 21:12:39.518626 3606 log.go:172] (0xc000ab51e0) Reply frame received for 5\nI0506 21:12:39.570034 3606 log.go:172] (0xc000ab51e0) Data frame received for 3\nI0506 21:12:39.570056 3606 log.go:172] (0xc00044fb80) (3) Data frame handling\nI0506 21:12:39.570080 3606 log.go:172] (0xc000ab51e0) Data frame received 
for 5\nI0506 21:12:39.570113 3606 log.go:172] (0xc0002ba140) (5) Data frame handling\nI0506 21:12:39.570147 3606 log.go:172] (0xc0002ba140) (5) Data frame sent\nI0506 21:12:39.570169 3606 log.go:172] (0xc000ab51e0) Data frame received for 5\nI0506 21:12:39.570186 3606 log.go:172] (0xc0002ba140) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.188.53 80\nConnection to 10.102.188.53 80 port [tcp/http] succeeded!\nI0506 21:12:39.571284 3606 log.go:172] (0xc000ab51e0) Data frame received for 1\nI0506 21:12:39.571305 3606 log.go:172] (0xc00044f540) (1) Data frame handling\nI0506 21:12:39.571324 3606 log.go:172] (0xc00044f540) (1) Data frame sent\nI0506 21:12:39.571343 3606 log.go:172] (0xc000ab51e0) (0xc00044f540) Stream removed, broadcasting: 1\nI0506 21:12:39.571386 3606 log.go:172] (0xc000ab51e0) Go away received\nI0506 21:12:39.571657 3606 log.go:172] (0xc000ab51e0) (0xc00044f540) Stream removed, broadcasting: 1\nI0506 21:12:39.571669 3606 log.go:172] (0xc000ab51e0) (0xc00044fb80) Stream removed, broadcasting: 3\nI0506 21:12:39.571675 3606 log.go:172] (0xc000ab51e0) (0xc0002ba140) Stream removed, broadcasting: 5\n" May 6 21:12:39.576: INFO: stdout: "" May 6 21:12:39.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9966 execpod-affinityg4s2m -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.188.53:80/ ; done' May 6 21:12:39.855: INFO: stderr: "I0506 21:12:39.702777 3625 log.go:172] (0xc00003a420) (0xc0004a6820) Create stream\nI0506 21:12:39.702846 3625 log.go:172] (0xc00003a420) (0xc0004a6820) Stream added, broadcasting: 1\nI0506 21:12:39.705720 3625 log.go:172] (0xc00003a420) Reply frame received for 1\nI0506 21:12:39.705781 3625 log.go:172] (0xc00003a420) (0xc00047ac80) Create stream\nI0506 21:12:39.705804 3625 log.go:172] (0xc00003a420) (0xc00047ac80) Stream added, broadcasting: 3\nI0506 21:12:39.706707 3625 log.go:172] (0xc00003a420) Reply frame received for 3\nI0506 21:12:39.706759 3625 log.go:172] (0xc00003a420) (0xc00039c6e0) Create stream\nI0506 21:12:39.706793 3625 log.go:172] (0xc00003a420) (0xc00039c6e0) Stream added, broadcasting: 5\nI0506 21:12:39.707515 3625 log.go:172] (0xc00003a420) Reply frame received for 5\nI0506 21:12:39.768824 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.768852 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.768861 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.768874 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.768880 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.768887 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.774521 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.774561 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.774586 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.775028 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.775049 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.775063 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.775152 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.775167 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 
21:12:39.775177 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.779968 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.779993 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.780016 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.780143 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.780164 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.780174 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.780182 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.780187 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.780193 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0506 21:12:39.780198 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.780202 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.780211 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0506 21:12:39.785536 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.785567 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.785580 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.785601 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.785614 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.785632 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.785649 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.785659 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.785679 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.789968 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.789982 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.789993 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.790340 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.790365 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.790373 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.790385 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.790393 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.790399 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.794102 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.794124 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.794135 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.794503 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.794543 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.794574 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.794598 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.794620 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.794649 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0506 21:12:39.798770 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.798786 3625 log.go:172] (0xc00047ac80) (3) Data frame 
handling\nI0506 21:12:39.798794 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.799445 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.799471 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.799481 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.799496 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.799504 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.799522 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.803128 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.803144 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.803161 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.803650 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.803666 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.803672 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.803681 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.803690 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.803695 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.807719 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.807743 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.807762 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.808109 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.808128 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.808143 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.808160 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.808166 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.808174 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0506 21:12:39.808181 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.808188 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.808222 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0506 21:12:39.812083 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.812111 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.812143 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.812463 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.812488 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.812533 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.812553 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0506 21:12:39.812575 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.812591 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.812613 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.812641 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.812679 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0506 21:12:39.816493 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.816532 3625 log.go:172] (0xc00047ac80) (3) Data 
frame handling\nI0506 21:12:39.816556 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.816833 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.816850 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.816861 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0506 21:12:39.816870 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.816879 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.816894 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.816907 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\n+ echo\n+ curl -q -sI0506 21:12:39.816923 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\n --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.816953 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0506 21:12:39.821302 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.821328 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.821347 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.821927 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.821942 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.821954 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.822066 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.822076 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.822083 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.826294 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.826308 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.826316 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.826776 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.826845 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.826873 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.826902 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.826924 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.826949 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.830794 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.830812 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.830825 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.831188 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.831202 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.831213 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.831223 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.831234 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.831243 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0506 21:12:39.831254 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.831266 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.831284 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0506 21:12:39.834168 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.834188 3625 log.go:172] (0xc00047ac80) (3) 
Data frame handling\nI0506 21:12:39.834203 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.835202 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.835229 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.835243 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0506 21:12:39.835253 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.835259 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.835270 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0506 21:12:39.835276 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.835280 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.835284 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.842026 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.842050 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.842074 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.842951 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.842965 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.842972 3625 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.188.53:80/\nI0506 21:12:39.843112 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.843127 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.843155 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.848319 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.848334 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.848349 3625 log.go:172] (0xc00047ac80) (3) Data frame sent\nI0506 21:12:39.848920 3625 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:12:39.848961 3625 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0506 21:12:39.848997 3625 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:12:39.849006 3625 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0506 21:12:39.850556 3625 log.go:172] (0xc00003a420) Data frame received for 1\nI0506 21:12:39.850569 3625 log.go:172] (0xc0004a6820) (1) Data frame handling\nI0506 21:12:39.850585 3625 log.go:172] (0xc0004a6820) (1) Data frame sent\nI0506 21:12:39.850726 3625 log.go:172] (0xc00003a420) (0xc0004a6820) Stream removed, broadcasting: 1\nI0506 21:12:39.850747 3625 log.go:172] (0xc00003a420) Go away received\nI0506 21:12:39.850989 3625 log.go:172] (0xc00003a420) (0xc0004a6820) Stream removed, broadcasting: 1\nI0506 21:12:39.851002 3625 log.go:172] (0xc00003a420) (0xc00047ac80) Stream removed, broadcasting: 3\nI0506 21:12:39.851008 3625 log.go:172] (0xc00003a420) (0xc00039c6e0) Stream removed, broadcasting: 5\n" May 6 21:12:39.855: INFO: stdout: "\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt\naffinity-clusterip-jvfrt" May 6 21:12:39.855: INFO: Received response from host: May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: 
affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Received response from host: affinity-clusterip-jvfrt May 6 21:12:39.855: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-9966, will wait for the garbage collector to delete the pods May 6 21:12:40.645: INFO: Deleting ReplicationController affinity-clusterip took: 557.41641ms May 6 21:12:41.345: INFO: Terminating ReplicationController affinity-clusterip pods took: 700.276484ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:12:57.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9966" for this suite. 
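The affinity behaviour the services spec just verified, sixteen consecutive requests all answered by affinity-clusterip-jvfrt, comes from a single field on the Service. Below is a minimal sketch of a ClusterIP service with ClientIP session affinity; the selector label and backend target port are assumptions for illustration, not values read from the suite.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "affinity-clusterip"}, // assumed pod label
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // assumed backend port
			}},
			// The field under test: kube-proxy pins each client IP to one
			// backend pod, which is why every curl above hit the same pod.
			SessionAffinity: corev1.ServiceAffinityClientIP,
		},
	}
	if _, err := client.CoreV1().Services("services-9966").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

Without SessionAffinity set (the default, None), the same curl loop would typically rotate across all three affinity-clusterip replicas.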
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:39.368 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":249,"skipped":3916,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:12:57.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 21:12:58.282: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 21:13:00.749: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396378, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396378, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396378, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396378, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 21:13:02.768: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396378, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396378, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396378, loc:(*time.Location)(0x7c2f200)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396378, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 21:13:05.815: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:13:05.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1083-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:13:07.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7113" for this suite. STEP: Destroying namespace "webhook-7113-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.889 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":250,"skipped":3928,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:13:08.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:13:09.031: INFO: Waiting up to 5m0s for pod "busybox-user-65534-6a7b2b5a-f8cb-4812-8078-ad5b8c00d599" in namespace "security-context-test-752" to be "Succeeded or Failed" May 6 21:13:09.204: INFO: Pod "busybox-user-65534-6a7b2b5a-f8cb-4812-8078-ad5b8c00d599": Phase="Pending", Reason="", readiness=false. 
Elapsed: 172.756611ms May 6 21:13:11.209: INFO: Pod "busybox-user-65534-6a7b2b5a-f8cb-4812-8078-ad5b8c00d599": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177571575s May 6 21:13:13.524: INFO: Pod "busybox-user-65534-6a7b2b5a-f8cb-4812-8078-ad5b8c00d599": Phase="Pending", Reason="", readiness=false. Elapsed: 4.492965516s May 6 21:13:15.803: INFO: Pod "busybox-user-65534-6a7b2b5a-f8cb-4812-8078-ad5b8c00d599": Phase="Running", Reason="", readiness=true. Elapsed: 6.771635482s May 6 21:13:17.807: INFO: Pod "busybox-user-65534-6a7b2b5a-f8cb-4812-8078-ad5b8c00d599": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.775519686s May 6 21:13:17.807: INFO: Pod "busybox-user-65534-6a7b2b5a-f8cb-4812-8078-ad5b8c00d599" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:13:17.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-752" for this suite. • [SLOW TEST:9.568 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":251,"skipped":3961,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:13:17.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 6 21:13:18.327: INFO: Waiting up to 5m0s for pod "pod-559f26b6-56cc-400a-bb93-84f4175e42f4" in namespace "emptydir-8608" to be "Succeeded or Failed" May 6 21:13:18.369: INFO: Pod "pod-559f26b6-56cc-400a-bb93-84f4175e42f4": Phase="Pending", Reason="", readiness=false. Elapsed: 41.919861ms May 6 21:13:20.373: INFO: Pod "pod-559f26b6-56cc-400a-bb93-84f4175e42f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045869776s May 6 21:13:22.378: INFO: Pod "pod-559f26b6-56cc-400a-bb93-84f4175e42f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050555824s May 6 21:13:24.420: INFO: Pod "pod-559f26b6-56cc-400a-bb93-84f4175e42f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092081358s May 6 21:13:26.423: INFO: Pod "pod-559f26b6-56cc-400a-bb93-84f4175e42f4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.095599007s STEP: Saw pod success May 6 21:13:26.423: INFO: Pod "pod-559f26b6-56cc-400a-bb93-84f4175e42f4" satisfied condition "Succeeded or Failed" May 6 21:13:26.425: INFO: Trying to get logs from node latest-worker pod pod-559f26b6-56cc-400a-bb93-84f4175e42f4 container test-container: STEP: delete the pod May 6 21:13:26.607: INFO: Waiting for pod pod-559f26b6-56cc-400a-bb93-84f4175e42f4 to disappear May 6 21:13:26.670: INFO: Pod pod-559f26b6-56cc-400a-bb93-84f4175e42f4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:13:26.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8608" for this suite. • [SLOW TEST:8.860 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":252,"skipped":3973,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:13:26.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 21:13:27.600: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 21:13:29.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396407, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396407, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396407, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396407, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 21:13:31.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396407, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396407, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396407, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396407, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 21:13:34.743: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:13:34.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2963" for this suite. STEP: Destroying namespace "webhook-2963-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.301 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":253,"skipped":4018,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:13:35.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:13:35.209: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:13:41.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2790" for this suite. • [SLOW TEST:6.356 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":254,"skipped":4051,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:13:41.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:13:41.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6711" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":255,"skipped":4063,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:13:41.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 6 21:13:42.642: INFO: Waiting up to 5m0s for pod "downward-api-9c945e03-b2e1-41f9-9cd3-0a4b3e74db52" in namespace "downward-api-973" to be "Succeeded or Failed" May 6 21:13:42.703: INFO: Pod "downward-api-9c945e03-b2e1-41f9-9cd3-0a4b3e74db52": Phase="Pending", Reason="", readiness=false. Elapsed: 60.851184ms May 6 21:13:44.707: INFO: Pod "downward-api-9c945e03-b2e1-41f9-9cd3-0a4b3e74db52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064813112s May 6 21:13:46.720: INFO: Pod "downward-api-9c945e03-b2e1-41f9-9cd3-0a4b3e74db52": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.078075008s STEP: Saw pod success May 6 21:13:46.720: INFO: Pod "downward-api-9c945e03-b2e1-41f9-9cd3-0a4b3e74db52" satisfied condition "Succeeded or Failed" May 6 21:13:46.726: INFO: Trying to get logs from node latest-worker pod downward-api-9c945e03-b2e1-41f9-9cd3-0a4b3e74db52 container dapi-container: STEP: delete the pod May 6 21:13:46.762: INFO: Waiting for pod downward-api-9c945e03-b2e1-41f9-9cd3-0a4b3e74db52 to disappear May 6 21:13:46.779: INFO: Pod downward-api-9c945e03-b2e1-41f9-9cd3-0a4b3e74db52 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:13:46.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-973" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":256,"skipped":4103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:13:46.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:14:04.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7776" for this suite. • [SLOW TEST:18.006 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":288,"completed":257,"skipped":4128,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:14:04.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:14:05.394: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 6 21:14:08.122: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:14:09.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3569" for this suite. • [SLOW TEST:5.359 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":258,"skipped":4148,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:14:10.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 6 21:14:12.684: INFO: Waiting up to 1m0s for all nodes to be ready May 6 21:15:12.710: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:15:12.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. May 6 21:15:17.261: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:15:45.502: INFO: pods created so far: [1 1 1] May 6 21:15:45.502: INFO: length of pods created so far: 3 May 6 21:15:59.514: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:16:06.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-2695" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:16:06.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7649" for this suite. 
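The PreemptionExecutionPath spec above hinges on pod priority: ReplicaSets whose pod templates carry a higher PriorityClass may evict lower-priority pods from a saturated node, which is what the "pods created so far" counters trace. As a rough sketch of the objects involved (class names and values here are illustrative; the suite creates its own):

  kubectl create priorityclass demo-low --value=1000 --description="preemptible workloads"
  kubectl create priorityclass demo-high --value=10000 --description="may preempt demo-low pods"
  # a pod template then opts in via spec.priorityClassName: demo-high;
  # when the node is full, the scheduler preempts demo-low pods to place it
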
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:116.576 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":259,"skipped":4155,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:16:06.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:16:06.857: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 6 21:16:11.863: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 6 21:16:11.863: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 6 21:16:12.009: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9598 /apis/apps/v1/namespaces/deployment-9598/deployments/test-cleanup-deployment 48c893be-a195-4503-ad1c-c471448f160a 2106083 1 2020-05-06 21:16:11 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-06 21:16:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035460c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 6 21:16:12.106: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-9598 /apis/apps/v1/namespaces/deployment-9598/replicasets/test-cleanup-deployment-6688745694 3c1a683a-26b9-4331-be0d-236727d719f3 2106092 1 2020-05-06 21:16:11 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 48c893be-a195-4503-ad1c-c471448f160a 0xc002dff847 0xc002dff848}] [] [{kube-controller-manager Update apps/v1 2020-05-06 21:16:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48c893be-a195-4503-ad1c-c471448f160a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002dff8d8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 21:16:12.106: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 6 21:16:12.106: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9598 /apis/apps/v1/namespaces/deployment-9598/replicasets/test-cleanup-controller 396c9db1-ac38-435b-80bc-7cdd140e7358 2106084 1 2020-05-06 21:16:06 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 48c893be-a195-4503-ad1c-c471448f160a 0xc002dff72f 0xc002dff740}] [] [{e2e.test Update apps/v1 2020-05-06 21:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-06 21:16:11 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"48c893be-a195-4503-ad1c-c471448f160a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002dff7d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 6 21:16:12.190: INFO: Pod "test-cleanup-controller-rfxps" is available: &Pod{ObjectMeta:{test-cleanup-controller-rfxps test-cleanup-controller- deployment-9598 /api/v1/namespaces/deployment-9598/pods/test-cleanup-controller-rfxps 1e361692-85c2-4a85-b181-9ec3b7bf7858 2106057 0 2020-05-06 21:16:06 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 396c9db1-ac38-435b-80bc-7cdd140e7358 0xc0036d56e7 0xc0036d56e8}] [] [{kube-controller-manager Update v1 2020-05-06 21:16:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"396c9db1-ac38-435b-80bc-7cdd140e7358\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-06 21:16:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.45\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvf8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvf8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvf8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Preemption
Policy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 21:16:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 21:16:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 21:16:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 21:16:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.45,StartTime:2020-05-06 21:16:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 21:16:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bd5c50d29d3d555deed074f8e1869dc48b4e65b27f357f94b7a7bae150d1defc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.45,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 21:16:12.191: INFO: Pod "test-cleanup-deployment-6688745694-k5lxb" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-k5lxb test-cleanup-deployment-6688745694- deployment-9598 /api/v1/namespaces/deployment-9598/pods/test-cleanup-deployment-6688745694-k5lxb 0025c1d9-2540-44c3-b1a2-67ab912310ed 2106091 0 2020-05-06 21:16:11 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 3c1a683a-26b9-4331-be0d-236727d719f3 0xc0036d5977 0xc0036d5978}] [] [{kube-controller-manager Update v1 2020-05-06 21:16:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3c1a683a-26b9-4331-be0d-236727d719f3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvf8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvf8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvf8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 
21:16:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:16:12.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9598" for this suite. • [SLOW TEST:5.486 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":260,"skipped":4202,"failed":0} [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:16:12.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7369 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7369 I0506 21:16:12.499395 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7369, replica count: 2 I0506 21:16:15.549823 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 21:16:18.550074 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 21:16:21.550364 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 21:16:21.550: INFO: Creating new exec pod May 6 21:16:28.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7369 execpodr6sxg -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 6 21:16:29.238: INFO: stderr: "I0506 21:16:28.984172 3646 log.go:172] (0xc0009d8840) (0xc000476d20) Create stream\nI0506 21:16:28.984228 3646 log.go:172] (0xc0009d8840) (0xc000476d20) Stream added, broadcasting: 1\nI0506 21:16:28.986620 3646 log.go:172] (0xc0009d8840) Reply frame received for 1\nI0506 21:16:28.986667 3646 log.go:172] (0xc0009d8840) (0xc00033e780) Create stream\nI0506 21:16:28.986678 3646 log.go:172] (0xc0009d8840) (0xc00033e780) 
Stream added, broadcasting: 3\nI0506 21:16:28.987690 3646 log.go:172] (0xc0009d8840) Reply frame received for 3\nI0506 21:16:28.987766 3646 log.go:172] (0xc0009d8840) (0xc000b08000) Create stream\nI0506 21:16:28.987801 3646 log.go:172] (0xc0009d8840) (0xc000b08000) Stream added, broadcasting: 5\nI0506 21:16:28.988634 3646 log.go:172] (0xc0009d8840) Reply frame received for 5\nI0506 21:16:29.054559 3646 log.go:172] (0xc0009d8840) Data frame received for 5\nI0506 21:16:29.054590 3646 log.go:172] (0xc000b08000) (5) Data frame handling\nI0506 21:16:29.054611 3646 log.go:172] (0xc000b08000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0506 21:16:29.230313 3646 log.go:172] (0xc0009d8840) Data frame received for 5\nI0506 21:16:29.230358 3646 log.go:172] (0xc000b08000) (5) Data frame handling\nI0506 21:16:29.230391 3646 log.go:172] (0xc000b08000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0506 21:16:29.230733 3646 log.go:172] (0xc0009d8840) Data frame received for 5\nI0506 21:16:29.230765 3646 log.go:172] (0xc000b08000) (5) Data frame handling\nI0506 21:16:29.231357 3646 log.go:172] (0xc0009d8840) Data frame received for 3\nI0506 21:16:29.231379 3646 log.go:172] (0xc00033e780) (3) Data frame handling\nI0506 21:16:29.232520 3646 log.go:172] (0xc0009d8840) Data frame received for 1\nI0506 21:16:29.232540 3646 log.go:172] (0xc000476d20) (1) Data frame handling\nI0506 21:16:29.232582 3646 log.go:172] (0xc000476d20) (1) Data frame sent\nI0506 21:16:29.232623 3646 log.go:172] (0xc0009d8840) (0xc000476d20) Stream removed, broadcasting: 1\nI0506 21:16:29.232674 3646 log.go:172] (0xc0009d8840) Go away received\nI0506 21:16:29.233033 3646 log.go:172] (0xc0009d8840) (0xc000476d20) Stream removed, broadcasting: 1\nI0506 21:16:29.233056 3646 log.go:172] (0xc0009d8840) (0xc00033e780) Stream removed, broadcasting: 3\nI0506 21:16:29.233067 3646 log.go:172] (0xc0009d8840) (0xc000b08000) Stream removed, broadcasting: 5\n" May 6 21:16:29.238: INFO: stdout: "" May 6 21:16:29.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7369 execpodr6sxg -- /bin/sh -x -c nc -zv -t -w 2 10.106.158.170 80' May 6 21:16:29.500: INFO: stderr: "I0506 21:16:29.426841 3666 log.go:172] (0xc00003af20) (0xc00050b540) Create stream\nI0506 21:16:29.426908 3666 log.go:172] (0xc00003af20) (0xc00050b540) Stream added, broadcasting: 1\nI0506 21:16:29.433761 3666 log.go:172] (0xc00003af20) Reply frame received for 1\nI0506 21:16:29.433898 3666 log.go:172] (0xc00003af20) (0xc0003c6e60) Create stream\nI0506 21:16:29.433980 3666 log.go:172] (0xc00003af20) (0xc0003c6e60) Stream added, broadcasting: 3\nI0506 21:16:29.435197 3666 log.go:172] (0xc00003af20) Reply frame received for 3\nI0506 21:16:29.435246 3666 log.go:172] (0xc00003af20) (0xc00035a140) Create stream\nI0506 21:16:29.435265 3666 log.go:172] (0xc00003af20) (0xc00035a140) Stream added, broadcasting: 5\nI0506 21:16:29.436078 3666 log.go:172] (0xc00003af20) Reply frame received for 5\nI0506 21:16:29.493376 3666 log.go:172] (0xc00003af20) Data frame received for 5\nI0506 21:16:29.493421 3666 log.go:172] (0xc00035a140) (5) Data frame handling\nI0506 21:16:29.493441 3666 log.go:172] (0xc00035a140) (5) Data frame sent\nI0506 21:16:29.493483 3666 log.go:172] (0xc00003af20) Data frame received for 5\nI0506 21:16:29.493501 3666 log.go:172] (0xc00035a140) (5) Data frame handling\nI0506 21:16:29.493513 3666 log.go:172] (0xc00003af20) Data frame received 
for 3\nI0506 21:16:29.493523 3666 log.go:172] (0xc0003c6e60) (3) Data frame handling\n+ nc -zv -t -w 2 10.106.158.170 80\nConnection to 10.106.158.170 80 port [tcp/http] succeeded!\nI0506 21:16:29.495286 3666 log.go:172] (0xc00003af20) Data frame received for 1\nI0506 21:16:29.495308 3666 log.go:172] (0xc00050b540) (1) Data frame handling\nI0506 21:16:29.495329 3666 log.go:172] (0xc00050b540) (1) Data frame sent\nI0506 21:16:29.495346 3666 log.go:172] (0xc00003af20) (0xc00050b540) Stream removed, broadcasting: 1\nI0506 21:16:29.495369 3666 log.go:172] (0xc00003af20) Go away received\nI0506 21:16:29.495762 3666 log.go:172] (0xc00003af20) (0xc00050b540) Stream removed, broadcasting: 1\nI0506 21:16:29.495789 3666 log.go:172] (0xc00003af20) (0xc0003c6e60) Stream removed, broadcasting: 3\nI0506 21:16:29.495799 3666 log.go:172] (0xc00003af20) (0xc00035a140) Stream removed, broadcasting: 5\n" May 6 21:16:29.500: INFO: stdout: "" May 6 21:16:29.500: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:16:29.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7369" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:17.302 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":261,"skipped":4202,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:16:29.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-583 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-583 May 6 21:16:29.667: INFO: Found 0 stateful pods, waiting for 1 May 6 21:16:39.679: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 6 21:16:39.739: INFO: Deleting all statefulset in ns statefulset-583 May 6 21:16:39.742: INFO: Scaling statefulset ss to 0 May 6 21:16:50.243: INFO: Waiting for statefulset status.replicas updated to 0 May 6 21:16:50.559: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:16:51.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-583" for this suite. • [SLOW TEST:21.818 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":262,"skipped":4203,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:16:51.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 21:16:54.625: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 21:16:56.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396614, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396614, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396614, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396614, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 21:17:00.038: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:17:10.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-110" for this suite. STEP: Destroying namespace "webhook-110-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.048 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":263,"skipped":4206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:17:10.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:17:10.640: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-1356 I0506 21:17:10.674721 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1356, replica count: 1 I0506 21:17:11.725321 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 21:17:12.725517 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 21:17:13.725721 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0506 21:17:14.725962 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 21:17:14.861: INFO: Created: latency-svc-hf76n May 6 21:17:14.907: INFO: Got endpoints: latency-svc-hf76n [81.262873ms] May 6 21:17:14.939: INFO: Created: latency-svc-6xlz6 May 6 21:17:14.957: INFO: Got endpoints: latency-svc-6xlz6 [49.75701ms] May 6 21:17:14.975: INFO: Created: latency-svc-8jw9b May 6 21:17:14.995: INFO: Got endpoints: latency-svc-8jw9b [87.977908ms] May 6 21:17:15.056: INFO: Created: latency-svc-cl9bk May 6 21:17:15.059: INFO: Got endpoints: latency-svc-cl9bk [151.984954ms] May 6 21:17:15.150: INFO: Created: latency-svc-84q4b May 6 21:17:15.218: INFO: Got endpoints: latency-svc-84q4b [311.279776ms] May 6 21:17:15.233: INFO: Created: latency-svc-86djq May 6 21:17:15.247: INFO: Got endpoints: latency-svc-86djq [340.164472ms] May 6 21:17:15.275: INFO: Created: latency-svc-cnlt2 May 6 21:17:15.284: INFO: Got endpoints: latency-svc-cnlt2 [376.317775ms] May 6 21:17:15.310: INFO: Created: latency-svc-wgtn7 May 6 21:17:15.371: INFO: Got endpoints: latency-svc-wgtn7 [463.492458ms] May 6 21:17:15.379: INFO: Created: latency-svc-74g4n May 6 21:17:15.406: INFO: Got endpoints: latency-svc-74g4n [499.387144ms] May 6 21:17:15.407: INFO: Created: latency-svc-ntpbz May 6 21:17:15.437: INFO: Got endpoints: latency-svc-ntpbz [530.267089ms] May 6 21:17:15.532: INFO: Created: latency-svc-thfqw May 6 21:17:15.555: INFO: Got endpoints: latency-svc-thfqw [648.093111ms] May 6 21:17:15.593: INFO: Created: latency-svc-6nd7p May 6 21:17:15.622: INFO: Got endpoints: latency-svc-6nd7p [714.510235ms] May 6 21:17:15.737: INFO: Created: latency-svc-5xqw6 May 6 21:17:15.767: INFO: Got endpoints: latency-svc-5xqw6 [860.106697ms] May 6 21:17:15.803: INFO: Created: latency-svc-xf8ml May 6 21:17:15.844: INFO: Got endpoints: latency-svc-xf8ml [937.175558ms] May 6 21:17:15.862: INFO: Created: latency-svc-ktf9x May 6 21:17:15.881: INFO: Got endpoints: latency-svc-ktf9x [973.786002ms] May 6 21:17:15.985: INFO: Created: latency-svc-qrcn8 May 6 21:17:15.990: INFO: Got endpoints: latency-svc-qrcn8 [1.082528889s] May 6 21:17:16.019: INFO: Created: latency-svc-gmmtr May 6 21:17:16.044: INFO: Got endpoints: latency-svc-gmmtr [1.08719584s] May 6 21:17:16.170: INFO: Created: latency-svc-ml7hv May 6 21:17:16.193: INFO: Got endpoints: latency-svc-ml7hv [1.198036121s] May 6 21:17:16.253: INFO: Created: latency-svc-bfmtp May 6 21:17:16.328: INFO: Got endpoints: latency-svc-bfmtp [1.269040596s] May 6 21:17:16.500: INFO: Created: latency-svc-lmhh9 May 6 21:17:16.506: INFO: Got endpoints: latency-svc-lmhh9 [1.287562268s] May 6 21:17:16.638: INFO: Created: latency-svc-b5gg8 May 6 21:17:16.642: INFO: Got endpoints: latency-svc-b5gg8 [1.394903608s] May 6 21:17:16.715: INFO: Created: latency-svc-hb96t May 6 21:17:16.738: INFO: Got endpoints: latency-svc-hb96t [1.454542288s] May 6 21:17:16.819: INFO: Created: latency-svc-4cm4t May 6 21:17:16.831: INFO: Got endpoints: latency-svc-4cm4t [1.460108613s] May 6 21:17:16.877: INFO: Created: latency-svc-8jlwl May 6 21:17:16.931: INFO: Got endpoints: latency-svc-8jlwl [1.52481194s] May 6 21:17:16.956: INFO: Created: latency-svc-2vx28 May 6 21:17:16.971: INFO: Got endpoints: latency-svc-2vx28 [1.533104641s] May 6 21:17:16.997: INFO: Created: latency-svc-msh2z May 6 21:17:17.012: INFO: Got endpoints: latency-svc-msh2z [1.456457904s] May 6 21:17:17.094: INFO: Created: latency-svc-tjxbm May 6 
21:17:17.096: INFO: Got endpoints: latency-svc-tjxbm [1.474534736s] May 6 21:17:17.236: INFO: Created: latency-svc-tslsk May 6 21:17:17.242: INFO: Got endpoints: latency-svc-tslsk [1.475044504s] May 6 21:17:17.300: INFO: Created: latency-svc-zv5s6 May 6 21:17:17.327: INFO: Got endpoints: latency-svc-zv5s6 [1.483047324s] May 6 21:17:17.375: INFO: Created: latency-svc-jgkxv May 6 21:17:17.397: INFO: Got endpoints: latency-svc-jgkxv [1.515704572s] May 6 21:17:17.441: INFO: Created: latency-svc-vjqd8 May 6 21:17:17.452: INFO: Got endpoints: latency-svc-vjqd8 [1.462029851s] May 6 21:17:17.543: INFO: Created: latency-svc-2k4xm May 6 21:17:17.567: INFO: Got endpoints: latency-svc-2k4xm [1.52294661s] May 6 21:17:17.691: INFO: Created: latency-svc-gstpl May 6 21:17:17.699: INFO: Got endpoints: latency-svc-gstpl [1.505452016s] May 6 21:17:17.729: INFO: Created: latency-svc-hk5xc May 6 21:17:17.746: INFO: Got endpoints: latency-svc-hk5xc [1.417935865s] May 6 21:17:17.843: INFO: Created: latency-svc-c2sft May 6 21:17:17.861: INFO: Got endpoints: latency-svc-c2sft [1.355069621s] May 6 21:17:17.915: INFO: Created: latency-svc-spmbg May 6 21:17:17.926: INFO: Got endpoints: latency-svc-spmbg [1.283899762s] May 6 21:17:17.970: INFO: Created: latency-svc-xkbld May 6 21:17:17.980: INFO: Got endpoints: latency-svc-xkbld [1.241280681s] May 6 21:17:18.035: INFO: Created: latency-svc-wg6zm May 6 21:17:18.052: INFO: Got endpoints: latency-svc-wg6zm [1.221230306s] May 6 21:17:18.131: INFO: Created: latency-svc-kvpcj May 6 21:17:18.179: INFO: Got endpoints: latency-svc-kvpcj [1.247558353s] May 6 21:17:18.257: INFO: Created: latency-svc-7k97t May 6 21:17:18.305: INFO: Got endpoints: latency-svc-7k97t [1.334660346s] May 6 21:17:18.330: INFO: Created: latency-svc-cxjl7 May 6 21:17:18.348: INFO: Got endpoints: latency-svc-cxjl7 [1.335722266s] May 6 21:17:18.420: INFO: Created: latency-svc-8kfjp May 6 21:17:18.437: INFO: Got endpoints: latency-svc-8kfjp [1.340611738s] May 6 21:17:18.590: INFO: Created: latency-svc-z6mqt May 6 21:17:18.637: INFO: Got endpoints: latency-svc-z6mqt [1.394398593s] May 6 21:17:18.638: INFO: Created: latency-svc-25hl8 May 6 21:17:18.666: INFO: Got endpoints: latency-svc-25hl8 [1.338443229s] May 6 21:17:18.750: INFO: Created: latency-svc-85w4x May 6 21:17:18.769: INFO: Got endpoints: latency-svc-85w4x [1.372771887s] May 6 21:17:18.865: INFO: Created: latency-svc-6fwln May 6 21:17:18.887: INFO: Got endpoints: latency-svc-6fwln [1.435314368s] May 6 21:17:18.949: INFO: Created: latency-svc-57wb2 May 6 21:17:18.998: INFO: Got endpoints: latency-svc-57wb2 [1.430854394s] May 6 21:17:19.063: INFO: Created: latency-svc-w7sgv May 6 21:17:19.087: INFO: Got endpoints: latency-svc-w7sgv [1.388381187s] May 6 21:17:19.177: INFO: Created: latency-svc-j7hpw May 6 21:17:19.183: INFO: Got endpoints: latency-svc-j7hpw [1.436967373s] May 6 21:17:19.205: INFO: Created: latency-svc-2pvg7 May 6 21:17:19.236: INFO: Got endpoints: latency-svc-2pvg7 [1.374487037s] May 6 21:17:19.303: INFO: Created: latency-svc-wmhh2 May 6 21:17:19.315: INFO: Got endpoints: latency-svc-wmhh2 [1.389242773s] May 6 21:17:19.339: INFO: Created: latency-svc-ltqtf May 6 21:17:19.357: INFO: Got endpoints: latency-svc-ltqtf [1.377348253s] May 6 21:17:19.373: INFO: Created: latency-svc-p5vl8 May 6 21:17:19.389: INFO: Got endpoints: latency-svc-p5vl8 [1.336463686s] May 6 21:17:19.446: INFO: Created: latency-svc-7bfbt May 6 21:17:19.461: INFO: Got endpoints: latency-svc-7bfbt [1.281506676s] May 6 21:17:19.488: INFO: Created: latency-svc-572jk May 6 
21:17:19.524: INFO: Got endpoints: latency-svc-572jk [1.218972183s] May 6 21:17:19.567: INFO: Created: latency-svc-wpwmx May 6 21:17:19.606: INFO: Got endpoints: latency-svc-wpwmx [1.258054674s] May 6 21:17:19.686: INFO: Created: latency-svc-dq2x4 May 6 21:17:19.690: INFO: Got endpoints: latency-svc-dq2x4 [1.252945845s] May 6 21:17:19.746: INFO: Created: latency-svc-2zmjr May 6 21:17:19.758: INFO: Got endpoints: latency-svc-2zmjr [1.120879954s] May 6 21:17:19.823: INFO: Created: latency-svc-rgsdm May 6 21:17:19.841: INFO: Got endpoints: latency-svc-rgsdm [1.174986385s] May 6 21:17:19.884: INFO: Created: latency-svc-jqw52 May 6 21:17:19.902: INFO: Got endpoints: latency-svc-jqw52 [1.132054851s] May 6 21:17:19.955: INFO: Created: latency-svc-qngtm May 6 21:17:19.959: INFO: Got endpoints: latency-svc-qngtm [1.071861796s] May 6 21:17:20.016: INFO: Created: latency-svc-96gjx May 6 21:17:20.364: INFO: Got endpoints: latency-svc-96gjx [1.365753662s] May 6 21:17:20.436: INFO: Created: latency-svc-tf9jp May 6 21:17:20.590: INFO: Got endpoints: latency-svc-tf9jp [1.502523167s] May 6 21:17:20.592: INFO: Created: latency-svc-k5cq6 May 6 21:17:20.604: INFO: Got endpoints: latency-svc-k5cq6 [1.420276029s] May 6 21:17:20.647: INFO: Created: latency-svc-vscpf May 6 21:17:20.671: INFO: Got endpoints: latency-svc-vscpf [1.434810333s] May 6 21:17:20.733: INFO: Created: latency-svc-99mqv May 6 21:17:20.760: INFO: Got endpoints: latency-svc-99mqv [1.444706098s] May 6 21:17:20.820: INFO: Created: latency-svc-rtgsg May 6 21:17:20.906: INFO: Got endpoints: latency-svc-rtgsg [1.548844184s] May 6 21:17:20.934: INFO: Created: latency-svc-sgn9m May 6 21:17:20.947: INFO: Got endpoints: latency-svc-sgn9m [1.558798995s] May 6 21:17:20.991: INFO: Created: latency-svc-r4rvz May 6 21:17:20.995: INFO: Got endpoints: latency-svc-r4rvz [1.534165272s] May 6 21:17:21.061: INFO: Created: latency-svc-9rnqx May 6 21:17:21.141: INFO: Got endpoints: latency-svc-9rnqx [1.616228966s] May 6 21:17:21.142: INFO: Created: latency-svc-dckt4 May 6 21:17:21.151: INFO: Got endpoints: latency-svc-dckt4 [1.545410968s] May 6 21:17:21.181: INFO: Created: latency-svc-v47cr May 6 21:17:21.194: INFO: Got endpoints: latency-svc-v47cr [1.503674097s] May 6 21:17:21.218: INFO: Created: latency-svc-jl55v May 6 21:17:21.230: INFO: Got endpoints: latency-svc-jl55v [1.471818798s] May 6 21:17:21.290: INFO: Created: latency-svc-ng5h8 May 6 21:17:21.324: INFO: Got endpoints: latency-svc-ng5h8 [1.483484433s] May 6 21:17:21.378: INFO: Created: latency-svc-9wntw May 6 21:17:21.422: INFO: Got endpoints: latency-svc-9wntw [1.520649002s] May 6 21:17:21.480: INFO: Created: latency-svc-f65gk May 6 21:17:21.501: INFO: Got endpoints: latency-svc-f65gk [1.541418225s] May 6 21:17:21.607: INFO: Created: latency-svc-4px49 May 6 21:17:21.692: INFO: Got endpoints: latency-svc-4px49 [1.328503774s] May 6 21:17:22.081: INFO: Created: latency-svc-bzmhd May 6 21:17:22.255: INFO: Got endpoints: latency-svc-bzmhd [1.665588037s] May 6 21:17:22.303: INFO: Created: latency-svc-x2j5f May 6 21:17:22.405: INFO: Got endpoints: latency-svc-x2j5f [1.801478937s] May 6 21:17:22.454: INFO: Created: latency-svc-wth6z May 6 21:17:22.496: INFO: Got endpoints: latency-svc-wth6z [1.825149113s] May 6 21:17:22.591: INFO: Created: latency-svc-2bk4w May 6 21:17:22.604: INFO: Got endpoints: latency-svc-2bk4w [1.843851932s] May 6 21:17:22.646: INFO: Created: latency-svc-7kz54 May 6 21:17:22.670: INFO: Got endpoints: latency-svc-7kz54 [1.764159687s] May 6 21:17:22.782: INFO: Created: latency-svc-jmlqm May 
6 21:17:22.790: INFO: Got endpoints: latency-svc-jmlqm [1.842234992s] May 6 21:17:22.856: INFO: Created: latency-svc-bkc58 May 6 21:17:22.880: INFO: Got endpoints: latency-svc-bkc58 [1.885634119s] May 6 21:17:22.967: INFO: Created: latency-svc-knwfc May 6 21:17:22.977: INFO: Got endpoints: latency-svc-knwfc [1.836180044s] May 6 21:17:23.018: INFO: Created: latency-svc-54hfs May 6 21:17:23.146: INFO: Got endpoints: latency-svc-54hfs [1.994932846s] May 6 21:17:23.170: INFO: Created: latency-svc-xb4lk May 6 21:17:23.218: INFO: Got endpoints: latency-svc-xb4lk [2.024204568s] May 6 21:17:24.399: INFO: Created: latency-svc-47tz5 May 6 21:17:25.272: INFO: Got endpoints: latency-svc-47tz5 [4.042349574s] May 6 21:17:25.276: INFO: Created: latency-svc-zxv2l May 6 21:17:25.320: INFO: Got endpoints: latency-svc-zxv2l [3.995261288s] May 6 21:17:26.324: INFO: Created: latency-svc-2vtwl May 6 21:17:26.324: INFO: Got endpoints: latency-svc-2vtwl [4.902129733s] May 6 21:17:26.771: INFO: Created: latency-svc-6q2ct May 6 21:17:26.800: INFO: Got endpoints: latency-svc-6q2ct [5.299149366s] May 6 21:17:27.007: INFO: Created: latency-svc-2wnw9 May 6 21:17:27.009: INFO: Got endpoints: latency-svc-2wnw9 [5.316414951s] May 6 21:17:27.335: INFO: Created: latency-svc-kgblg May 6 21:17:27.660: INFO: Got endpoints: latency-svc-kgblg [5.404702563s] May 6 21:17:27.871: INFO: Created: latency-svc-bl8ph May 6 21:17:27.902: INFO: Got endpoints: latency-svc-bl8ph [5.496777519s] May 6 21:17:27.943: INFO: Created: latency-svc-bzsv8 May 6 21:17:28.680: INFO: Got endpoints: latency-svc-bzsv8 [6.184391132s] May 6 21:17:28.742: INFO: Created: latency-svc-lzn9d May 6 21:17:28.967: INFO: Got endpoints: latency-svc-lzn9d [6.362577912s] May 6 21:17:29.309: INFO: Created: latency-svc-ng8v6 May 6 21:17:29.692: INFO: Got endpoints: latency-svc-ng8v6 [7.021661055s] May 6 21:17:29.696: INFO: Created: latency-svc-mtxmn May 6 21:17:29.784: INFO: Got endpoints: latency-svc-mtxmn [6.994189266s] May 6 21:17:30.086: INFO: Created: latency-svc-ns8bx May 6 21:17:30.297: INFO: Got endpoints: latency-svc-ns8bx [7.416469515s] May 6 21:17:30.303: INFO: Created: latency-svc-ld24m May 6 21:17:30.337: INFO: Got endpoints: latency-svc-ld24m [7.360265106s] May 6 21:17:30.592: INFO: Created: latency-svc-kx8n7 May 6 21:17:30.986: INFO: Got endpoints: latency-svc-kx8n7 [7.839468983s] May 6 21:17:31.033: INFO: Created: latency-svc-rxqp5 May 6 21:17:31.049: INFO: Got endpoints: latency-svc-rxqp5 [7.831253544s] May 6 21:17:31.143: INFO: Created: latency-svc-lgdkr May 6 21:17:31.145: INFO: Got endpoints: latency-svc-lgdkr [5.873132231s] May 6 21:17:31.207: INFO: Created: latency-svc-grnmv May 6 21:17:31.229: INFO: Got endpoints: latency-svc-grnmv [5.909174634s] May 6 21:17:31.363: INFO: Created: latency-svc-52rl8 May 6 21:17:31.442: INFO: Got endpoints: latency-svc-52rl8 [5.117362296s] May 6 21:17:31.442: INFO: Created: latency-svc-rs4t6 May 6 21:17:31.590: INFO: Got endpoints: latency-svc-rs4t6 [4.790599038s] May 6 21:17:31.601: INFO: Created: latency-svc-tlzbh May 6 21:17:31.642: INFO: Got endpoints: latency-svc-tlzbh [4.633273112s] May 6 21:17:32.037: INFO: Created: latency-svc-7dw5m May 6 21:17:32.088: INFO: Got endpoints: latency-svc-7dw5m [4.427429387s] May 6 21:17:32.411: INFO: Created: latency-svc-qzb64 May 6 21:17:32.643: INFO: Got endpoints: latency-svc-qzb64 [4.741055769s] May 6 21:17:32.673: INFO: Created: latency-svc-l9ptx May 6 21:17:33.195: INFO: Got endpoints: latency-svc-l9ptx [4.51481097s] May 6 21:17:33.672: INFO: Created: latency-svc-6fb8q May 
6 21:17:33.688: INFO: Got endpoints: latency-svc-6fb8q [4.721394806s] May 6 21:17:34.177: INFO: Created: latency-svc-5k9pz May 6 21:17:34.710: INFO: Got endpoints: latency-svc-5k9pz [5.018191898s] May 6 21:17:34.986: INFO: Created: latency-svc-2gnnn May 6 21:17:35.304: INFO: Got endpoints: latency-svc-2gnnn [5.519728979s] May 6 21:17:35.306: INFO: Created: latency-svc-jxznh May 6 21:17:35.563: INFO: Got endpoints: latency-svc-jxznh [5.265660037s] May 6 21:17:35.635: INFO: Created: latency-svc-n48dx May 6 21:17:35.782: INFO: Got endpoints: latency-svc-n48dx [5.444197941s] May 6 21:17:35.863: INFO: Created: latency-svc-zgfsp May 6 21:17:36.178: INFO: Got endpoints: latency-svc-zgfsp [5.191757139s] May 6 21:17:36.465: INFO: Created: latency-svc-hcksj May 6 21:17:36.519: INFO: Got endpoints: latency-svc-hcksj [5.470187517s] May 6 21:17:36.522: INFO: Created: latency-svc-z67jw May 6 21:17:36.552: INFO: Got endpoints: latency-svc-z67jw [5.406691698s] May 6 21:17:37.017: INFO: Created: latency-svc-rncl8 May 6 21:17:37.416: INFO: Got endpoints: latency-svc-rncl8 [6.187380082s] May 6 21:17:37.424: INFO: Created: latency-svc-n29xn May 6 21:17:37.446: INFO: Got endpoints: latency-svc-n29xn [6.003763513s] May 6 21:17:37.725: INFO: Created: latency-svc-ksqf7 May 6 21:17:37.763: INFO: Got endpoints: latency-svc-ksqf7 [6.172665329s] May 6 21:17:38.429: INFO: Created: latency-svc-vtrdd May 6 21:17:38.464: INFO: Got endpoints: latency-svc-vtrdd [6.821635578s] May 6 21:17:38.523: INFO: Created: latency-svc-vcnsc May 6 21:17:38.890: INFO: Got endpoints: latency-svc-vcnsc [6.801948863s] May 6 21:17:38.937: INFO: Created: latency-svc-bkzsl May 6 21:17:38.958: INFO: Got endpoints: latency-svc-bkzsl [6.31473351s] May 6 21:17:39.411: INFO: Created: latency-svc-cvdnl May 6 21:17:39.447: INFO: Got endpoints: latency-svc-cvdnl [6.252297649s] May 6 21:17:39.449: INFO: Created: latency-svc-d5ngc May 6 21:17:39.478: INFO: Got endpoints: latency-svc-d5ngc [5.789638307s] May 6 21:17:39.825: INFO: Created: latency-svc-rgbqj May 6 21:17:39.830: INFO: Got endpoints: latency-svc-rgbqj [5.120195225s] May 6 21:17:39.911: INFO: Created: latency-svc-fx666 May 6 21:17:40.082: INFO: Got endpoints: latency-svc-fx666 [4.777806744s] May 6 21:17:40.574: INFO: Created: latency-svc-wsxmf May 6 21:17:40.826: INFO: Got endpoints: latency-svc-wsxmf [5.262741856s] May 6 21:17:40.868: INFO: Created: latency-svc-qp567 May 6 21:17:41.286: INFO: Created: latency-svc-t9dt6 May 6 21:17:41.286: INFO: Got endpoints: latency-svc-qp567 [5.503986889s] May 6 21:17:41.369: INFO: Got endpoints: latency-svc-t9dt6 [5.191770161s] May 6 21:17:41.560: INFO: Created: latency-svc-4zpj7 May 6 21:17:41.610: INFO: Got endpoints: latency-svc-4zpj7 [5.090117551s] May 6 21:17:41.809: INFO: Created: latency-svc-ktclf May 6 21:17:41.813: INFO: Got endpoints: latency-svc-ktclf [5.260906358s] May 6 21:17:42.189: INFO: Created: latency-svc-lvbgh May 6 21:17:42.193: INFO: Got endpoints: latency-svc-lvbgh [4.776957527s] May 6 21:17:42.270: INFO: Created: latency-svc-4xjzq May 6 21:17:42.427: INFO: Got endpoints: latency-svc-4xjzq [4.980888872s] May 6 21:17:42.427: INFO: Created: latency-svc-x84c7 May 6 21:17:42.432: INFO: Got endpoints: latency-svc-x84c7 [4.668343223s] May 6 21:17:42.467: INFO: Created: latency-svc-4m6gz May 6 21:17:42.510: INFO: Got endpoints: latency-svc-4m6gz [4.04570709s] May 6 21:17:42.584: INFO: Created: latency-svc-pc49l May 6 21:17:42.606: INFO: Got endpoints: latency-svc-pc49l [3.716681187s] May 6 21:17:42.739: INFO: Created: latency-svc-n745f May 
6 21:17:42.744: INFO: Got endpoints: latency-svc-n745f [3.785621753s] May 6 21:17:42.798: INFO: Created: latency-svc-5qvgp May 6 21:17:42.828: INFO: Got endpoints: latency-svc-5qvgp [3.380528301s] May 6 21:17:42.884: INFO: Created: latency-svc-9qz92 May 6 21:17:42.886: INFO: Got endpoints: latency-svc-9qz92 [3.40832885s] May 6 21:17:42.960: INFO: Created: latency-svc-zrr2l May 6 21:17:42.979: INFO: Got endpoints: latency-svc-zrr2l [3.148582958s] May 6 21:17:43.021: INFO: Created: latency-svc-qmvmp May 6 21:17:43.028: INFO: Got endpoints: latency-svc-qmvmp [2.946139292s] May 6 21:17:43.249: INFO: Created: latency-svc-rq8cr May 6 21:17:43.303: INFO: Got endpoints: latency-svc-rq8cr [2.477346501s] May 6 21:17:43.458: INFO: Created: latency-svc-2s49z May 6 21:17:43.461: INFO: Got endpoints: latency-svc-2s49z [2.174721034s] May 6 21:17:43.525: INFO: Created: latency-svc-2d9lq May 6 21:17:43.541: INFO: Got endpoints: latency-svc-2d9lq [2.171752076s] May 6 21:17:43.608: INFO: Created: latency-svc-ds2lz May 6 21:17:43.631: INFO: Got endpoints: latency-svc-ds2lz [2.021570786s] May 6 21:17:43.662: INFO: Created: latency-svc-h6n78 May 6 21:17:43.671: INFO: Got endpoints: latency-svc-h6n78 [1.857679471s] May 6 21:17:43.752: INFO: Created: latency-svc-gc62g May 6 21:17:43.794: INFO: Got endpoints: latency-svc-gc62g [1.600196592s] May 6 21:17:43.830: INFO: Created: latency-svc-pcmwm May 6 21:17:43.847: INFO: Got endpoints: latency-svc-pcmwm [1.420789096s] May 6 21:17:43.919: INFO: Created: latency-svc-rfzxf May 6 21:17:43.931: INFO: Got endpoints: latency-svc-rfzxf [1.499744741s] May 6 21:17:43.961: INFO: Created: latency-svc-bds8m May 6 21:17:43.978: INFO: Got endpoints: latency-svc-bds8m [1.468240378s] May 6 21:17:43.998: INFO: Created: latency-svc-qvgmv May 6 21:17:44.063: INFO: Got endpoints: latency-svc-qvgmv [1.456318821s] May 6 21:17:44.112: INFO: Created: latency-svc-sw6rv May 6 21:17:44.140: INFO: Got endpoints: latency-svc-sw6rv [1.396604968s] May 6 21:17:44.225: INFO: Created: latency-svc-2bxsb May 6 21:17:44.243: INFO: Got endpoints: latency-svc-2bxsb [1.41441589s] May 6 21:17:44.291: INFO: Created: latency-svc-s7jjd May 6 21:17:44.303: INFO: Got endpoints: latency-svc-s7jjd [1.416757856s] May 6 21:17:44.368: INFO: Created: latency-svc-cxcjq May 6 21:17:44.375: INFO: Got endpoints: latency-svc-cxcjq [1.395634248s] May 6 21:17:44.430: INFO: Created: latency-svc-5lmhb May 6 21:17:44.448: INFO: Got endpoints: latency-svc-5lmhb [1.419754818s] May 6 21:17:44.549: INFO: Created: latency-svc-4l268 May 6 21:17:44.592: INFO: Got endpoints: latency-svc-4l268 [1.289198168s] May 6 21:17:44.640: INFO: Created: latency-svc-b7xz8 May 6 21:17:44.716: INFO: Got endpoints: latency-svc-b7xz8 [1.254958134s] May 6 21:17:44.748: INFO: Created: latency-svc-nzllv May 6 21:17:44.766: INFO: Got endpoints: latency-svc-nzllv [1.225031022s] May 6 21:17:44.784: INFO: Created: latency-svc-rqsqm May 6 21:17:44.802: INFO: Got endpoints: latency-svc-rqsqm [1.170955667s] May 6 21:17:44.871: INFO: Created: latency-svc-rzgds May 6 21:17:44.880: INFO: Got endpoints: latency-svc-rzgds [1.209356364s] May 6 21:17:44.904: INFO: Created: latency-svc-cct8x May 6 21:17:44.934: INFO: Got endpoints: latency-svc-cct8x [1.140398696s] May 6 21:17:44.964: INFO: Created: latency-svc-8glw4 May 6 21:17:45.033: INFO: Got endpoints: latency-svc-8glw4 [1.185962896s] May 6 21:17:45.035: INFO: Created: latency-svc-lkhfb May 6 21:17:45.049: INFO: Got endpoints: latency-svc-lkhfb [1.118101242s] May 6 21:17:45.084: INFO: Created: latency-svc-h4zmm May 
6 21:17:45.114: INFO: Got endpoints: latency-svc-h4zmm [1.135785032s] May 6 21:17:45.171: INFO: Created: latency-svc-nbmld May 6 21:17:45.204: INFO: Got endpoints: latency-svc-nbmld [1.140906935s] May 6 21:17:45.205: INFO: Created: latency-svc-9ggt7 May 6 21:17:45.252: INFO: Got endpoints: latency-svc-9ggt7 [1.111283269s] May 6 21:17:45.320: INFO: Created: latency-svc-jldnf May 6 21:17:45.324: INFO: Got endpoints: latency-svc-jldnf [1.081584573s] May 6 21:17:45.354: INFO: Created: latency-svc-jjrl5 May 6 21:17:45.379: INFO: Got endpoints: latency-svc-jjrl5 [1.076172945s] May 6 21:17:45.409: INFO: Created: latency-svc-qbr7k May 6 21:17:45.482: INFO: Got endpoints: latency-svc-qbr7k [1.106889291s] May 6 21:17:45.523: INFO: Created: latency-svc-hllcw May 6 21:17:45.541: INFO: Got endpoints: latency-svc-hllcw [1.093331051s] May 6 21:17:45.570: INFO: Created: latency-svc-mgpvw May 6 21:17:45.650: INFO: Got endpoints: latency-svc-mgpvw [1.057211652s] May 6 21:17:45.678: INFO: Created: latency-svc-wrvdb May 6 21:17:45.691: INFO: Got endpoints: latency-svc-wrvdb [975.510002ms] May 6 21:17:45.847: INFO: Created: latency-svc-jq9th May 6 21:17:45.860: INFO: Got endpoints: latency-svc-jq9th [1.093380824s] May 6 21:17:45.894: INFO: Created: latency-svc-hv9sx May 6 21:17:45.925: INFO: Got endpoints: latency-svc-hv9sx [1.12310073s] May 6 21:17:46.015: INFO: Created: latency-svc-s5whg May 6 21:17:46.063: INFO: Created: latency-svc-tztmw May 6 21:17:46.063: INFO: Got endpoints: latency-svc-s5whg [1.182366998s] May 6 21:17:46.092: INFO: Got endpoints: latency-svc-tztmw [1.158218995s] May 6 21:17:46.159: INFO: Created: latency-svc-gj22b May 6 21:17:46.176: INFO: Got endpoints: latency-svc-gj22b [1.142730025s] May 6 21:17:46.243: INFO: Created: latency-svc-kbdhf May 6 21:17:46.428: INFO: Got endpoints: latency-svc-kbdhf [1.378933992s] May 6 21:17:46.638: INFO: Created: latency-svc-9xs9b May 6 21:17:46.729: INFO: Got endpoints: latency-svc-9xs9b [1.615587303s] May 6 21:17:46.897: INFO: Created: latency-svc-jrgbn May 6 21:17:46.928: INFO: Got endpoints: latency-svc-jrgbn [1.724095884s] May 6 21:17:46.957: INFO: Created: latency-svc-xznlp May 6 21:17:47.167: INFO: Got endpoints: latency-svc-xznlp [1.91529211s] May 6 21:17:47.352: INFO: Created: latency-svc-pdgck May 6 21:17:47.354: INFO: Got endpoints: latency-svc-pdgck [2.029911494s] May 6 21:17:47.519: INFO: Created: latency-svc-vj7xz May 6 21:17:47.563: INFO: Got endpoints: latency-svc-vj7xz [2.183827059s] May 6 21:17:47.606: INFO: Created: latency-svc-w8ds9 May 6 21:17:47.703: INFO: Got endpoints: latency-svc-w8ds9 [2.221204432s] May 6 21:17:47.719: INFO: Created: latency-svc-6dk2l May 6 21:17:47.757: INFO: Got endpoints: latency-svc-6dk2l [2.216000221s] May 6 21:17:47.780: INFO: Created: latency-svc-n7jz4 May 6 21:17:47.795: INFO: Got endpoints: latency-svc-n7jz4 [2.144902679s] May 6 21:17:47.862: INFO: Created: latency-svc-bgxps May 6 21:17:47.917: INFO: Created: latency-svc-29dph May 6 21:17:47.918: INFO: Got endpoints: latency-svc-bgxps [2.226387202s] May 6 21:17:47.935: INFO: Got endpoints: latency-svc-29dph [2.075376496s] May 6 21:17:48.021: INFO: Created: latency-svc-47ck5 May 6 21:17:48.024: INFO: Got endpoints: latency-svc-47ck5 [2.098879458s] May 6 21:17:48.086: INFO: Created: latency-svc-44qn4 May 6 21:17:48.113: INFO: Got endpoints: latency-svc-44qn4 [2.050494621s] May 6 21:17:48.177: INFO: Created: latency-svc-wnnjd May 6 21:17:48.186: INFO: Got endpoints: latency-svc-wnnjd [2.093578506s] May 6 21:17:48.212: INFO: Created: latency-svc-49swq May 
6 21:17:48.242: INFO: Got endpoints: latency-svc-49swq [2.065675175s] May 6 21:17:48.347: INFO: Created: latency-svc-txnhm May 6 21:17:48.360: INFO: Got endpoints: latency-svc-txnhm [1.931391115s] May 6 21:17:48.391: INFO: Created: latency-svc-q5mhh May 6 21:17:48.421: INFO: Got endpoints: latency-svc-q5mhh [1.691690642s] May 6 21:17:48.502: INFO: Created: latency-svc-fr4m8 May 6 21:17:48.504: INFO: Got endpoints: latency-svc-fr4m8 [1.576130043s] May 6 21:17:48.559: INFO: Created: latency-svc-4ggvc May 6 21:17:48.570: INFO: Got endpoints: latency-svc-4ggvc [1.402848001s] May 6 21:17:48.595: INFO: Created: latency-svc-6mj6j May 6 21:17:48.655: INFO: Got endpoints: latency-svc-6mj6j [1.300985371s] May 6 21:17:48.667: INFO: Created: latency-svc-4s78p May 6 21:17:48.685: INFO: Got endpoints: latency-svc-4s78p [1.122223649s] May 6 21:17:48.686: INFO: Latencies: [49.75701ms 87.977908ms 151.984954ms 311.279776ms 340.164472ms 376.317775ms 463.492458ms 499.387144ms 530.267089ms 648.093111ms 714.510235ms 860.106697ms 937.175558ms 973.786002ms 975.510002ms 1.057211652s 1.071861796s 1.076172945s 1.081584573s 1.082528889s 1.08719584s 1.093331051s 1.093380824s 1.106889291s 1.111283269s 1.118101242s 1.120879954s 1.122223649s 1.12310073s 1.132054851s 1.135785032s 1.140398696s 1.140906935s 1.142730025s 1.158218995s 1.170955667s 1.174986385s 1.182366998s 1.185962896s 1.198036121s 1.209356364s 1.218972183s 1.221230306s 1.225031022s 1.241280681s 1.247558353s 1.252945845s 1.254958134s 1.258054674s 1.269040596s 1.281506676s 1.283899762s 1.287562268s 1.289198168s 1.300985371s 1.328503774s 1.334660346s 1.335722266s 1.336463686s 1.338443229s 1.340611738s 1.355069621s 1.365753662s 1.372771887s 1.374487037s 1.377348253s 1.378933992s 1.388381187s 1.389242773s 1.394398593s 1.394903608s 1.395634248s 1.396604968s 1.402848001s 1.41441589s 1.416757856s 1.417935865s 1.419754818s 1.420276029s 1.420789096s 1.430854394s 1.434810333s 1.435314368s 1.436967373s 1.444706098s 1.454542288s 1.456318821s 1.456457904s 1.460108613s 1.462029851s 1.468240378s 1.471818798s 1.474534736s 1.475044504s 1.483047324s 1.483484433s 1.499744741s 1.502523167s 1.503674097s 1.505452016s 1.515704572s 1.520649002s 1.52294661s 1.52481194s 1.533104641s 1.534165272s 1.541418225s 1.545410968s 1.548844184s 1.558798995s 1.576130043s 1.600196592s 1.615587303s 1.616228966s 1.665588037s 1.691690642s 1.724095884s 1.764159687s 1.801478937s 1.825149113s 1.836180044s 1.842234992s 1.843851932s 1.857679471s 1.885634119s 1.91529211s 1.931391115s 1.994932846s 2.021570786s 2.024204568s 2.029911494s 2.050494621s 2.065675175s 2.075376496s 2.093578506s 2.098879458s 2.144902679s 2.171752076s 2.174721034s 2.183827059s 2.216000221s 2.221204432s 2.226387202s 2.477346501s 2.946139292s 3.148582958s 3.380528301s 3.40832885s 3.716681187s 3.785621753s 3.995261288s 4.042349574s 4.04570709s 4.427429387s 4.51481097s 4.633273112s 4.668343223s 4.721394806s 4.741055769s 4.776957527s 4.777806744s 4.790599038s 4.902129733s 4.980888872s 5.018191898s 5.090117551s 5.117362296s 5.120195225s 5.191757139s 5.191770161s 5.260906358s 5.262741856s 5.265660037s 5.299149366s 5.316414951s 5.404702563s 5.406691698s 5.444197941s 5.470187517s 5.496777519s 5.503986889s 5.519728979s 5.789638307s 5.873132231s 5.909174634s 6.003763513s 6.172665329s 6.184391132s 6.187380082s 6.252297649s 6.31473351s 6.362577912s 6.801948863s 6.821635578s 6.994189266s 7.021661055s 7.360265106s 7.416469515s 7.831253544s 7.839468983s] May 6 21:17:48.686: INFO: 50 %ile: 1.515704572s May 6 21:17:48.686: INFO: 90 %ile: 5.503986889s 
May 6 21:17:48.686: INFO: 99 %ile: 7.831253544s May 6 21:17:48.686: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:17:48.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1356" for this suite. • [SLOW TEST:38.300 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":264,"skipped":4255,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:17:48.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 6 21:17:53.518: INFO: Successfully updated pod "pod-update-activedeadlineseconds-31a7b3c3-332c-4edc-9473-c1fcd52392ef" May 6 21:17:53.518: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-31a7b3c3-332c-4edc-9473-c1fcd52392ef" in namespace "pods-4066" to be "terminated due to deadline exceeded" May 6 21:17:53.557: INFO: Pod "pod-update-activedeadlineseconds-31a7b3c3-332c-4edc-9473-c1fcd52392ef": Phase="Running", Reason="", readiness=true. Elapsed: 38.952368ms May 6 21:17:55.595: INFO: Pod "pod-update-activedeadlineseconds-31a7b3c3-332c-4edc-9473-c1fcd52392ef": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.076785284s May 6 21:17:55.595: INFO: Pod "pod-update-activedeadlineseconds-31a7b3c3-332c-4edc-9473-c1fcd52392ef" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:17:55.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4066" for this suite. 
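For reference, the deadline behavior exercised above can be reproduced outside the suite; a minimal sketch, assuming a hypothetical pod name deadline-demo and the pause image in place of the suite's generated pod:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: deadline-demo            # hypothetical name
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2
  EOF
  # activeDeadlineSeconds is one of the few mutable pod-spec fields; once it
  # elapses, the kubelet kills the pod and it ends Phase=Failed with
  # reason DeadlineExceeded, which is exactly what the spec waits for.
  kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
  kubectl get pod deadline-demo -w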
• [SLOW TEST:6.915 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":265,"skipped":4264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:17:55.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 21:17:58.166: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 21:18:00.445: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396678, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396678, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396678, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396677, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 21:18:03.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396678, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396678, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396678, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396677, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 21:18:04.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396678, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396678, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396678, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396677, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 21:18:07.903: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:18:07.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8916-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:18:09.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6964" for this suite. STEP: Destroying namespace "webhook-6964-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.810 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":266,"skipped":4302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:18:09.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-9fe58e37-a2cb-4822-a4d7-fa35e3a4cb37 STEP: Creating a pod to test consume configMaps May 6 21:18:09.616: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f24fe82b-51f7-40e2-9a8a-57204405daad" in namespace "projected-3961" to be "Succeeded or Failed" May 6 21:18:09.698: INFO: Pod "pod-projected-configmaps-f24fe82b-51f7-40e2-9a8a-57204405daad": Phase="Pending", Reason="", readiness=false. Elapsed: 81.694304ms May 6 21:18:11.752: INFO: Pod "pod-projected-configmaps-f24fe82b-51f7-40e2-9a8a-57204405daad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136504472s May 6 21:18:14.059: INFO: Pod "pod-projected-configmaps-f24fe82b-51f7-40e2-9a8a-57204405daad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.442687285s May 6 21:18:16.075: INFO: Pod "pod-projected-configmaps-f24fe82b-51f7-40e2-9a8a-57204405daad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.459071291s STEP: Saw pod success May 6 21:18:16.075: INFO: Pod "pod-projected-configmaps-f24fe82b-51f7-40e2-9a8a-57204405daad" satisfied condition "Succeeded or Failed" May 6 21:18:16.093: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-f24fe82b-51f7-40e2-9a8a-57204405daad container projected-configmap-volume-test: STEP: delete the pod May 6 21:18:16.363: INFO: Waiting for pod pod-projected-configmaps-f24fe82b-51f7-40e2-9a8a-57204405daad to disappear May 6 21:18:16.386: INFO: Pod pod-projected-configmaps-f24fe82b-51f7-40e2-9a8a-57204405daad no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:18:16.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3961" for this suite. • [SLOW TEST:7.023 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":267,"skipped":4470,"failed":0} [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:18:16.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:18:16.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5312" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":288,"completed":268,"skipped":4470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:18:16.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:18:17.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8052" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":269,"skipped":4497,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:18:17.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:18:17.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6703' May 6 21:18:17.543: INFO: stderr: "" May 6 21:18:17.543: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 6 21:18:17.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6703' May 6 21:18:17.995: INFO: stderr: "" May 6 21:18:17.995: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 6 21:18:19.111: INFO: Selector matched 1 pods for map[app:agnhost] May 6 21:18:19.111: INFO: Found 0 / 1 May 6 21:18:20.016: INFO: Selector matched 1 pods for map[app:agnhost] May 6 21:18:20.016: INFO: Found 0 / 1 May 6 21:18:21.069: INFO: Selector matched 1 pods for map[app:agnhost] May 6 21:18:21.069: INFO: Found 0 / 1 May 6 21:18:21.998: INFO: Selector matched 1 pods for map[app:agnhost] May 6 21:18:21.998: INFO: Found 1 / 1 May 6 21:18:21.998: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 6 21:18:22.022: INFO: Selector matched 1 pods for map[app:agnhost] May 6 21:18:22.022: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 6 21:18:22.022: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-fq9jn --namespace=kubectl-6703' May 6 21:18:22.242: INFO: stderr: "" May 6 21:18:22.242: INFO: stdout: "Name: agnhost-master-fq9jn\nNamespace: kubectl-6703\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Wed, 06 May 2020 21:18:17 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.52\nIPs:\n IP: 10.244.2.52\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://2926a89d96f92e1a7eed49d049d429b11832f1b37ead0019594e94c853e32b85\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 06 May 2020 21:18:21 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-n2nng (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-n2nng:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-n2nng\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-6703/agnhost-master-fq9jn to latest-worker2\n Normal Pulled 3s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 2s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 1s kubelet, latest-worker2 Started container agnhost-master\n" May 6 21:18:22.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-6703' May 6 21:18:22.440: INFO: stderr: "" May 6 21:18:22.440: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6703\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-fq9jn\n" May 6 21:18:22.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-6703' May 6 21:18:22.686: INFO: stderr: "" May 6 21:18:22.686: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6703\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.101.34.58\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: \nSession Affinity: None\nEvents: \n" May 6 21:18:22.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe node latest-control-plane' 
May 6 21:18:23.016: INFO: stderr: "" May 6 21:18:23.016: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Wed, 06 May 2020 21:18:21 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 06 May 2020 21:14:54 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 06 May 2020 21:14:54 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 06 May 2020 21:14:54 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 06 May 2020 21:14:54 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 7d11h\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 7d11h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d11h\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 7d11h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 7d11h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 7d11h\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d11h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 7d11h\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d11h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 6 21:18:23.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config describe namespace kubectl-6703' May 6 21:18:23.195: INFO: stderr: "" May 6 21:18:23.195: INFO: stdout: "Name: kubectl-6703\nLabels: e2e-framework=kubectl\n e2e-run=007d0f8a-11d6-40be-a500-64001ef56cc7\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:18:23.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6703" for this suite. • [SLOW TEST:6.139 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":270,"skipped":4504,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:18:23.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:18:23.354: INFO: Creating ReplicaSet my-hostname-basic-6d30d535-dc80-4b15-8145-7a6bdb200549 May 6 21:18:23.440: INFO: Pod name my-hostname-basic-6d30d535-dc80-4b15-8145-7a6bdb200549: Found 0 pods out of 1 May 6 21:18:28.466: INFO: Pod name my-hostname-basic-6d30d535-dc80-4b15-8145-7a6bdb200549: Found 1 pods out of 1 May 6 21:18:28.466: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-6d30d535-dc80-4b15-8145-7a6bdb200549" is running May 6 21:18:28.476: INFO: Pod "my-hostname-basic-6d30d535-dc80-4b15-8145-7a6bdb200549-nqgh4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 21:18:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 21:18:27 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 21:18:27 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 21:18:23 +0000 UTC Reason: Message:}]) May 6 21:18:28.477: INFO: Trying to dial the pod May 6 21:18:33.515: INFO: Controller my-hostname-basic-6d30d535-dc80-4b15-8145-7a6bdb200549: Got expected result from replica 1 [my-hostname-basic-6d30d535-dc80-4b15-8145-7a6bdb200549-nqgh4]: "my-hostname-basic-6d30d535-dc80-4b15-8145-7a6bdb200549-nqgh4", 1 of 1 required successes so far 
[AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:18:33.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6968" for this suite. • [SLOW TEST:10.327 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":271,"skipped":4539,"failed":0} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:18:33.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-2827cbf3-cc0a-4ef5-b62c-675cc1133196 in namespace container-probe-6796 May 6 21:18:37.723: INFO: Started pod busybox-2827cbf3-cc0a-4ef5-b62c-675cc1133196 in namespace container-probe-6796 STEP: checking the pod's current state and verifying that restartCount is present May 6 21:18:37.769: INFO: Initial restart count of pod busybox-2827cbf3-cc0a-4ef5-b62c-675cc1133196 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:22:39.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6796" for this suite. 
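An equivalent pod to the one this spec builds, sketched with illustrative names: the probed file exists for the container's whole life, so the exec check keeps succeeding and restartCount is expected to stay 0 for the full observation window:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec-demo       # illustrative name
  spec:
    containers:
    - name: busybox
      image: busybox
      args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]   # succeeds as long as the file exists
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  kubectl get pod liveness-exec-demo \
    -o jsonpath='{.status.containerStatuses[0].restartCount}'   # expect 0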
• [SLOW TEST:246.120 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":272,"skipped":4541,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:22:39.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 6 21:22:40.477: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 6 21:22:42.488: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396960, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396960, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396960, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396960, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 21:22:44.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396960, loc:(*time.Location)(0x7c2f200)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396960, loc:(*time.Location)(0x7c2f200)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396960, loc:(*time.Location)(0x7c2f200)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724396960, loc:(*time.Location)(0x7c2f200)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 21:22:47.712: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:22:47.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:22:48.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-756" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.306 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":273,"skipped":4542,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:22:48.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 6 21:22:55.678: INFO: Waiting up to 5m0s for pod "client-envvars-0e1f3d87-51cd-47d1-9f41-6e0953f3a35e" in namespace "pods-567" to be "Succeeded or Failed" May 6 21:22:55.687: INFO: Pod "client-envvars-0e1f3d87-51cd-47d1-9f41-6e0953f3a35e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.027274ms May 6 21:22:57.692: INFO: Pod "client-envvars-0e1f3d87-51cd-47d1-9f41-6e0953f3a35e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013235142s May 6 21:22:59.699: INFO: Pod "client-envvars-0e1f3d87-51cd-47d1-9f41-6e0953f3a35e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02047549s May 6 21:23:01.838: INFO: Pod "client-envvars-0e1f3d87-51cd-47d1-9f41-6e0953f3a35e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.159660345s STEP: Saw pod success May 6 21:23:01.838: INFO: Pod "client-envvars-0e1f3d87-51cd-47d1-9f41-6e0953f3a35e" satisfied condition "Succeeded or Failed" May 6 21:23:01.841: INFO: Trying to get logs from node latest-worker pod client-envvars-0e1f3d87-51cd-47d1-9f41-6e0953f3a35e container env3cont: STEP: delete the pod May 6 21:23:02.350: INFO: Waiting for pod client-envvars-0e1f3d87-51cd-47d1-9f41-6e0953f3a35e to disappear May 6 21:23:02.743: INFO: Pod client-envvars-0e1f3d87-51cd-47d1-9f41-6e0953f3a35e no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:23:02.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-567" for this suite. • [SLOW TEST:13.985 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":274,"skipped":4547,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:23:02.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 6 21:23:11.796: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:23:11.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4054" for this suite. 
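Adoption and release hinge entirely on the label selector; a sketch of the same sequence by hand, reusing the pod name and 'name' label key from the log but with a hypothetical ReplicaSet name and the pause image:

  # 1. An orphan pod whose label matches the ReplicaSet selector created next.
  kubectl run pod-adoption-release --image=k8s.gcr.io/pause:3.2 --labels=name=adopt-demo
  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: adopt-demo               # hypothetical name
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: adopt-demo
    template:
      metadata:
        labels:
          name: adopt-demo
      spec:
        containers:
        - name: pause
          image: k8s.gcr.io/pause:3.2
  EOF
  # 2. The controller adopts the orphan: ownerReferences now point at the ReplicaSet.
  kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].name}'
  # 3. Changing the matched label releases the pod; the ReplicaSet spawns a replacement.
  kubectl label pod pod-adoption-release name=released --overwrite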
• [SLOW TEST:9.197 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":275,"skipped":4580,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:23:12.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 6 21:23:13.657: INFO: Waiting up to 5m0s for pod "var-expansion-1e3ef61a-5ba6-4329-a4a4-3a9278ea5597" in namespace "var-expansion-2254" to be "Succeeded or Failed" May 6 21:23:13.953: INFO: Pod "var-expansion-1e3ef61a-5ba6-4329-a4a4-3a9278ea5597": Phase="Pending", Reason="", readiness=false. Elapsed: 296.540959ms May 6 21:23:15.957: INFO: Pod "var-expansion-1e3ef61a-5ba6-4329-a4a4-3a9278ea5597": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300531595s May 6 21:23:17.994: INFO: Pod "var-expansion-1e3ef61a-5ba6-4329-a4a4-3a9278ea5597": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336956672s May 6 21:23:19.999: INFO: Pod "var-expansion-1e3ef61a-5ba6-4329-a4a4-3a9278ea5597": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.341714885s STEP: Saw pod success May 6 21:23:19.999: INFO: Pod "var-expansion-1e3ef61a-5ba6-4329-a4a4-3a9278ea5597" satisfied condition "Succeeded or Failed" May 6 21:23:20.001: INFO: Trying to get logs from node latest-worker pod var-expansion-1e3ef61a-5ba6-4329-a4a4-3a9278ea5597 container dapi-container: STEP: delete the pod May 6 21:23:20.197: INFO: Waiting for pod var-expansion-1e3ef61a-5ba6-4329-a4a4-3a9278ea5597 to disappear May 6 21:23:20.232: INFO: Pod var-expansion-1e3ef61a-5ba6-4329-a4a4-3a9278ea5597 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:23:20.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2254" for this suite. 
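Composition relies on the $(VAR) expansion the kubelet performs before the container starts; a minimal sketch with illustrative names and values:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo       # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "env"]
      env:
      - name: FOO
        value: foo-value
      - name: BAR
        value: bar-value
      - name: FOOBAR
        value: "$(FOO);;$(BAR)"    # expanded by the kubelet, not by a shell
  EOF
  kubectl logs var-expansion-demo | grep FOOBAR   # expect FOOBAR=foo-value;;bar-value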
• [SLOW TEST:8.064 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":276,"skipped":4596,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:23:20.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 6 21:23:20.379: INFO: Waiting up to 5m0s for pod "pod-c4516109-8c43-4d97-991e-b92c4d5402cb" in namespace "emptydir-9092" to be "Succeeded or Failed" May 6 21:23:20.399: INFO: Pod "pod-c4516109-8c43-4d97-991e-b92c4d5402cb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.456432ms May 6 21:23:22.403: INFO: Pod "pod-c4516109-8c43-4d97-991e-b92c4d5402cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024503102s May 6 21:23:24.406: INFO: Pod "pod-c4516109-8c43-4d97-991e-b92c4d5402cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027673186s STEP: Saw pod success May 6 21:23:24.406: INFO: Pod "pod-c4516109-8c43-4d97-991e-b92c4d5402cb" satisfied condition "Succeeded or Failed" May 6 21:23:24.409: INFO: Trying to get logs from node latest-worker2 pod pod-c4516109-8c43-4d97-991e-b92c4d5402cb container test-container: STEP: delete the pod May 6 21:23:24.462: INFO: Waiting for pod pod-c4516109-8c43-4d97-991e-b92c4d5402cb to disappear May 6 21:23:24.471: INFO: Pod pod-c4516109-8c43-4d97-991e-b92c4d5402cb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:23:24.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9092" for this suite. 
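The (non-root,0644,default) case above maps onto a pod like the following sketch, substituting busybox for the suite's mounttest image; "default" refers to the emptyDir medium, i.e. node disk rather than tmpfs:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo            # illustrative name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001              # the non-root part of the test matrix
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo data > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt
    volumes:
    - name: scratch
      emptyDir: {}                 # default medium; medium: Memory would be tmpfs
  EOF
  kubectl logs emptydir-demo       # expect -rw-r--r-- owned by uid 1001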
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":277,"skipped":4631,"failed":0} SS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:23:24.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7774 STEP: creating service affinity-nodeport-transition in namespace services-7774 STEP: creating replication controller affinity-nodeport-transition in namespace services-7774 I0506 21:23:24.627787 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-7774, replica count: 3 I0506 21:23:27.678166 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 21:23:30.678404 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 21:23:30.712: INFO: Creating new exec pod May 6 21:23:35.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7774 execpod-affinitym4w9p -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 6 21:23:39.080: INFO: stderr: "I0506 21:23:38.967663 3811 log.go:172] (0xc00003a420) (0xc0006f8c80) Create stream\nI0506 21:23:38.967718 3811 log.go:172] (0xc00003a420) (0xc0006f8c80) Stream added, broadcasting: 1\nI0506 21:23:38.970511 3811 log.go:172] (0xc00003a420) Reply frame received for 1\nI0506 21:23:38.970560 3811 log.go:172] (0xc00003a420) (0xc0006f9c20) Create stream\nI0506 21:23:38.970571 3811 log.go:172] (0xc00003a420) (0xc0006f9c20) Stream added, broadcasting: 3\nI0506 21:23:38.971519 3811 log.go:172] (0xc00003a420) Reply frame received for 3\nI0506 21:23:38.971560 3811 log.go:172] (0xc00003a420) (0xc0006f0500) Create stream\nI0506 21:23:38.971572 3811 log.go:172] (0xc00003a420) (0xc0006f0500) Stream added, broadcasting: 5\nI0506 21:23:38.972585 3811 log.go:172] (0xc00003a420) Reply frame received for 5\nI0506 21:23:39.072103 3811 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:23:39.072143 3811 log.go:172] (0xc0006f0500) (5) Data frame handling\nI0506 21:23:39.072174 3811 log.go:172] (0xc0006f0500) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0506 21:23:39.072456 3811 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:23:39.072477 3811 log.go:172] (0xc0006f0500) (5) Data frame handling\nI0506 21:23:39.072525 3811 log.go:172] (0xc0006f0500) (5) Data frame sent\nConnection to affinity-nodeport-transition 
80 port [tcp/http] succeeded!\nI0506 21:23:39.072839 3811 log.go:172] (0xc00003a420) Data frame received for 5\nI0506 21:23:39.072858 3811 log.go:172] (0xc0006f0500) (5) Data frame handling\nI0506 21:23:39.073002 3811 log.go:172] (0xc00003a420) Data frame received for 3\nI0506 21:23:39.073025 3811 log.go:172] (0xc0006f9c20) (3) Data frame handling\nI0506 21:23:39.074800 3811 log.go:172] (0xc00003a420) Data frame received for 1\nI0506 21:23:39.074823 3811 log.go:172] (0xc0006f8c80) (1) Data frame handling\nI0506 21:23:39.074838 3811 log.go:172] (0xc0006f8c80) (1) Data frame sent\nI0506 21:23:39.074853 3811 log.go:172] (0xc00003a420) (0xc0006f8c80) Stream removed, broadcasting: 1\nI0506 21:23:39.074883 3811 log.go:172] (0xc00003a420) Go away received\nI0506 21:23:39.075242 3811 log.go:172] (0xc00003a420) (0xc0006f8c80) Stream removed, broadcasting: 1\nI0506 21:23:39.075258 3811 log.go:172] (0xc00003a420) (0xc0006f9c20) Stream removed, broadcasting: 3\nI0506 21:23:39.075265 3811 log.go:172] (0xc00003a420) (0xc0006f0500) Stream removed, broadcasting: 5\n" May 6 21:23:39.080: INFO: stdout: "" May 6 21:23:39.081: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7774 execpod-affinitym4w9p -- /bin/sh -x -c nc -zv -t -w 2 10.96.235.194 80' May 6 21:23:39.284: INFO: stderr: "I0506 21:23:39.213787 3845 log.go:172] (0xc000728000) (0xc000948960) Create stream\nI0506 21:23:39.213857 3845 log.go:172] (0xc000728000) (0xc000948960) Stream added, broadcasting: 1\nI0506 21:23:39.215334 3845 log.go:172] (0xc000728000) Reply frame received for 1\nI0506 21:23:39.215386 3845 log.go:172] (0xc000728000) (0xc00093ebe0) Create stream\nI0506 21:23:39.215407 3845 log.go:172] (0xc000728000) (0xc00093ebe0) Stream added, broadcasting: 3\nI0506 21:23:39.216175 3845 log.go:172] (0xc000728000) Reply frame received for 3\nI0506 21:23:39.216222 3845 log.go:172] (0xc000728000) (0xc000948e60) Create stream\nI0506 21:23:39.216241 3845 log.go:172] (0xc000728000) (0xc000948e60) Stream added, broadcasting: 5\nI0506 21:23:39.217107 3845 log.go:172] (0xc000728000) Reply frame received for 5\nI0506 21:23:39.277481 3845 log.go:172] (0xc000728000) Data frame received for 3\nI0506 21:23:39.277516 3845 log.go:172] (0xc00093ebe0) (3) Data frame handling\nI0506 21:23:39.277752 3845 log.go:172] (0xc000728000) Data frame received for 5\nI0506 21:23:39.277795 3845 log.go:172] (0xc000948e60) (5) Data frame handling\nI0506 21:23:39.277826 3845 log.go:172] (0xc000948e60) (5) Data frame sent\nI0506 21:23:39.277842 3845 log.go:172] (0xc000728000) Data frame received for 5\nI0506 21:23:39.277855 3845 log.go:172] (0xc000948e60) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.235.194 80\nConnection to 10.96.235.194 80 port [tcp/http] succeeded!\nI0506 21:23:39.278925 3845 log.go:172] (0xc000728000) Data frame received for 1\nI0506 21:23:39.278954 3845 log.go:172] (0xc000948960) (1) Data frame handling\nI0506 21:23:39.278985 3845 log.go:172] (0xc000948960) (1) Data frame sent\nI0506 21:23:39.279014 3845 log.go:172] (0xc000728000) (0xc000948960) Stream removed, broadcasting: 1\nI0506 21:23:39.279028 3845 log.go:172] (0xc000728000) Go away received\nI0506 21:23:39.279546 3845 log.go:172] (0xc000728000) (0xc000948960) Stream removed, broadcasting: 1\nI0506 21:23:39.279587 3845 log.go:172] (0xc000728000) (0xc00093ebe0) Stream removed, broadcasting: 3\nI0506 21:23:39.279615 3845 log.go:172] (0xc000728000) (0xc000948e60) Stream removed, broadcasting: 5\n" May 6 21:23:39.284: 
INFO: stdout: "" May 6 21:23:39.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7774 execpod-affinitym4w9p -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32218' May 6 21:23:39.480: INFO: stderr: "I0506 21:23:39.410286 3865 log.go:172] (0xc000aae000) (0xc0004d0500) Create stream\nI0506 21:23:39.410349 3865 log.go:172] (0xc000aae000) (0xc0004d0500) Stream added, broadcasting: 1\nI0506 21:23:39.413001 3865 log.go:172] (0xc000aae000) Reply frame received for 1\nI0506 21:23:39.413032 3865 log.go:172] (0xc000aae000) (0xc000392140) Create stream\nI0506 21:23:39.413040 3865 log.go:172] (0xc000aae000) (0xc000392140) Stream added, broadcasting: 3\nI0506 21:23:39.414294 3865 log.go:172] (0xc000aae000) Reply frame received for 3\nI0506 21:23:39.414341 3865 log.go:172] (0xc000aae000) (0xc0006bedc0) Create stream\nI0506 21:23:39.414354 3865 log.go:172] (0xc000aae000) (0xc0006bedc0) Stream added, broadcasting: 5\nI0506 21:23:39.415295 3865 log.go:172] (0xc000aae000) Reply frame received for 5\nI0506 21:23:39.472403 3865 log.go:172] (0xc000aae000) Data frame received for 5\nI0506 21:23:39.472459 3865 log.go:172] (0xc0006bedc0) (5) Data frame handling\nI0506 21:23:39.472490 3865 log.go:172] (0xc0006bedc0) (5) Data frame sent\nI0506 21:23:39.472512 3865 log.go:172] (0xc000aae000) Data frame received for 5\nI0506 21:23:39.472534 3865 log.go:172] (0xc0006bedc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32218\nConnection to 172.17.0.13 32218 port [tcp/32218] succeeded!\nI0506 21:23:39.472563 3865 log.go:172] (0xc0006bedc0) (5) Data frame sent\nI0506 21:23:39.472949 3865 log.go:172] (0xc000aae000) Data frame received for 5\nI0506 21:23:39.472978 3865 log.go:172] (0xc0006bedc0) (5) Data frame handling\nI0506 21:23:39.473532 3865 log.go:172] (0xc000aae000) Data frame received for 3\nI0506 21:23:39.473553 3865 log.go:172] (0xc000392140) (3) Data frame handling\nI0506 21:23:39.475170 3865 log.go:172] (0xc000aae000) Data frame received for 1\nI0506 21:23:39.475203 3865 log.go:172] (0xc0004d0500) (1) Data frame handling\nI0506 21:23:39.475223 3865 log.go:172] (0xc0004d0500) (1) Data frame sent\nI0506 21:23:39.475238 3865 log.go:172] (0xc000aae000) (0xc0004d0500) Stream removed, broadcasting: 1\nI0506 21:23:39.475257 3865 log.go:172] (0xc000aae000) Go away received\nI0506 21:23:39.475663 3865 log.go:172] (0xc000aae000) (0xc0004d0500) Stream removed, broadcasting: 1\nI0506 21:23:39.475682 3865 log.go:172] (0xc000aae000) (0xc000392140) Stream removed, broadcasting: 3\nI0506 21:23:39.475703 3865 log.go:172] (0xc000aae000) (0xc0006bedc0) Stream removed, broadcasting: 5\n" May 6 21:23:39.480: INFO: stdout: "" May 6 21:23:39.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7774 execpod-affinitym4w9p -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32218' May 6 21:23:39.730: INFO: stderr: "I0506 21:23:39.642082 3885 log.go:172] (0xc000b3d4a0) (0xc000be8280) Create stream\nI0506 21:23:39.642154 3885 log.go:172] (0xc000b3d4a0) (0xc000be8280) Stream added, broadcasting: 1\nI0506 21:23:39.647587 3885 log.go:172] (0xc000b3d4a0) Reply frame received for 1\nI0506 21:23:39.647645 3885 log.go:172] (0xc000b3d4a0) (0xc0006fed20) Create stream\nI0506 21:23:39.647660 3885 log.go:172] (0xc000b3d4a0) (0xc0006fed20) Stream added, broadcasting: 3\nI0506 21:23:39.648625 3885 log.go:172] (0xc000b3d4a0) Reply frame received for 3\nI0506 21:23:39.648670 3885 
log.go:172] (0xc000b3d4a0) (0xc00057c280) Create stream\nI0506 21:23:39.648686 3885 log.go:172] (0xc000b3d4a0) (0xc00057c280) Stream added, broadcasting: 5\nI0506 21:23:39.649837 3885 log.go:172] (0xc000b3d4a0) Reply frame received for 5\nI0506 21:23:39.723779 3885 log.go:172] (0xc000b3d4a0) Data frame received for 5\nI0506 21:23:39.723805 3885 log.go:172] (0xc00057c280) (5) Data frame handling\nI0506 21:23:39.723817 3885 log.go:172] (0xc00057c280) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 32218\nConnection to 172.17.0.12 32218 port [tcp/32218] succeeded!\nI0506 21:23:39.724082 3885 log.go:172] (0xc000b3d4a0) Data frame received for 3\nI0506 21:23:39.724092 3885 log.go:172] (0xc0006fed20) (3) Data frame handling\nI0506 21:23:39.724115 3885 log.go:172] (0xc000b3d4a0) Data frame received for 5\nI0506 21:23:39.724131 3885 log.go:172] (0xc00057c280) (5) Data frame handling\nI0506 21:23:39.726028 3885 log.go:172] (0xc000b3d4a0) Data frame received for 1\nI0506 21:23:39.726049 3885 log.go:172] (0xc000be8280) (1) Data frame handling\nI0506 21:23:39.726059 3885 log.go:172] (0xc000be8280) (1) Data frame sent\nI0506 21:23:39.726070 3885 log.go:172] (0xc000b3d4a0) (0xc000be8280) Stream removed, broadcasting: 1\nI0506 21:23:39.726272 3885 log.go:172] (0xc000b3d4a0) Go away received\nI0506 21:23:39.726299 3885 log.go:172] (0xc000b3d4a0) (0xc000be8280) Stream removed, broadcasting: 1\nI0506 21:23:39.726315 3885 log.go:172] (0xc000b3d4a0) (0xc0006fed20) Stream removed, broadcasting: 3\nI0506 21:23:39.726321 3885 log.go:172] (0xc000b3d4a0) (0xc00057c280) Stream removed, broadcasting: 5\n" May 6 21:23:39.730: INFO: stdout: "" May 6 21:23:39.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7774 execpod-affinitym4w9p -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32218/ ; done' May 6 21:23:40.050: INFO: stderr: "I0506 21:23:39.888484 3907 log.go:172] (0xc000970000) (0xc000730dc0) Create stream\nI0506 21:23:39.888574 3907 log.go:172] (0xc000970000) (0xc000730dc0) Stream added, broadcasting: 1\nI0506 21:23:39.890657 3907 log.go:172] (0xc000970000) Reply frame received for 1\nI0506 21:23:39.890712 3907 log.go:172] (0xc000970000) (0xc000731d60) Create stream\nI0506 21:23:39.890725 3907 log.go:172] (0xc000970000) (0xc000731d60) Stream added, broadcasting: 3\nI0506 21:23:39.891588 3907 log.go:172] (0xc000970000) Reply frame received for 3\nI0506 21:23:39.891624 3907 log.go:172] (0xc000970000) (0xc0008346e0) Create stream\nI0506 21:23:39.891635 3907 log.go:172] (0xc000970000) (0xc0008346e0) Stream added, broadcasting: 5\nI0506 21:23:39.892461 3907 log.go:172] (0xc000970000) Reply frame received for 5\nI0506 21:23:39.955150 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.955193 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.955204 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:39.955225 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:39.955232 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:39.955241 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:39.960448 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.960472 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.960485 3907 log.go:172] (0xc000731d60) (3) Data frame 
sent\nI0506 21:23:39.960906 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:39.960920 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0506 21:23:39.960934 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.960950 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.960965 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:39.960983 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\nI0506 21:23:39.960992 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:39.961003 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:39.961011 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n 2 http://172.17.0.13:32218/\nI0506 21:23:39.967362 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.967395 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.967427 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:39.967947 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.967971 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:39.968007 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:39.968022 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:39.968036 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.968044 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:39.972184 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.972199 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.972209 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:39.972839 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.972859 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.972869 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:39.972893 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:39.972920 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:39.972943 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:39.977815 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.977838 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.977861 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:39.978385 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:39.978406 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:39.978417 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n+ I0506 21:23:39.978435 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.978490 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.978515 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:39.978550 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:39.978567 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:39.978598 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\necho\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:39.985779 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.985801 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.985831 3907 log.go:172] (0xc000731d60) (3) 
Data frame sent\nI0506 21:23:39.986545 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.986585 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.986600 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:39.986620 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:39.986637 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:39.986662 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:39.990582 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.990605 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.990631 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:39.990960 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:39.990987 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:39.991008 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:39.991140 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.991156 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.991170 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:39.994786 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.994809 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.994836 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:39.995189 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:39.995204 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:39.995217 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:39.995223 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\nI0506 21:23:39.995228 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:39.995233 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:39.995252 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:39.995272 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:39.995291 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\nI0506 21:23:40.000064 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.000082 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.000097 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.000678 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:40.000690 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:40.000703 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.000753 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.000765 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.000778 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.004928 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.004940 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.004946 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.005637 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.005655 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.005673 3907 log.go:172] 
(0xc000970000) Data frame received for 5\nI0506 21:23:40.005701 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:40.005717 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.005734 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.009580 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.009592 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.009600 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.010511 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.010531 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.010544 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.010558 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:40.010571 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:40.010583 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.014503 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.014540 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.014573 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.015175 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.015194 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.015204 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.015218 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:40.015224 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:40.015231 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.019973 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.019988 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.020002 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.020659 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:40.020686 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:40.020711 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.020745 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.020765 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.020785 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.027131 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.027156 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.027192 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.027447 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:40.027468 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:40.027477 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.027640 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.027651 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.027658 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.032696 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.032717 3907 log.go:172] 
(0xc000731d60) (3) Data frame handling\nI0506 21:23:40.032729 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.033279 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:40.033310 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:40.033324 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.033350 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.033374 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.033397 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.036754 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.036777 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.036799 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.037442 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:40.037467 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:40.037484 3907 log.go:172] (0xc0008346e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.037508 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.037520 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.037528 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.041913 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.041932 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.041948 3907 log.go:172] (0xc000731d60) (3) Data frame sent\nI0506 21:23:40.042880 3907 log.go:172] (0xc000970000) Data frame received for 3\nI0506 21:23:40.042899 3907 log.go:172] (0xc000731d60) (3) Data frame handling\nI0506 21:23:40.043008 3907 log.go:172] (0xc000970000) Data frame received for 5\nI0506 21:23:40.043044 3907 log.go:172] (0xc0008346e0) (5) Data frame handling\nI0506 21:23:40.044670 3907 log.go:172] (0xc000970000) Data frame received for 1\nI0506 21:23:40.044691 3907 log.go:172] (0xc000730dc0) (1) Data frame handling\nI0506 21:23:40.044716 3907 log.go:172] (0xc000730dc0) (1) Data frame sent\nI0506 21:23:40.044748 3907 log.go:172] (0xc000970000) (0xc000730dc0) Stream removed, broadcasting: 1\nI0506 21:23:40.044770 3907 log.go:172] (0xc000970000) Go away received\nI0506 21:23:40.045478 3907 log.go:172] (0xc000970000) (0xc000730dc0) Stream removed, broadcasting: 1\nI0506 21:23:40.045511 3907 log.go:172] (0xc000970000) (0xc000731d60) Stream removed, broadcasting: 3\nI0506 21:23:40.045527 3907 log.go:172] (0xc000970000) (0xc0008346e0) Stream removed, broadcasting: 5\n" May 6 21:23:40.051: INFO: stdout: "\naffinity-nodeport-transition-8w5qs\naffinity-nodeport-transition-7ksc4\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-7ksc4\naffinity-nodeport-transition-7ksc4\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-8w5qs\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-7ksc4\naffinity-nodeport-transition-7ksc4\naffinity-nodeport-transition-8w5qs\naffinity-nodeport-transition-8w5qs\naffinity-nodeport-transition-vk6mk" May 6 21:23:40.051: INFO: Received response from host: May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-8w5qs May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-7ksc4 May 6 
21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-7ksc4 May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-7ksc4 May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-8w5qs May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-7ksc4 May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-7ksc4 May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-8w5qs May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-8w5qs May 6 21:23:40.051: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7774 execpod-affinitym4w9p -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32218/ ; done' May 6 21:23:40.411: INFO: stderr: "I0506 21:23:40.224679 3929 log.go:172] (0xc0000e96b0) (0xc000944320) Create stream\nI0506 21:23:40.224733 3929 log.go:172] (0xc0000e96b0) (0xc000944320) Stream added, broadcasting: 1\nI0506 21:23:40.234437 3929 log.go:172] (0xc0000e96b0) Reply frame received for 1\nI0506 21:23:40.234502 3929 log.go:172] (0xc0000e96b0) (0xc0004bc280) Create stream\nI0506 21:23:40.234526 3929 log.go:172] (0xc0000e96b0) (0xc0004bc280) Stream added, broadcasting: 3\nI0506 21:23:40.235549 3929 log.go:172] (0xc0000e96b0) Reply frame received for 3\nI0506 21:23:40.235596 3929 log.go:172] (0xc0000e96b0) (0xc000478dc0) Create stream\nI0506 21:23:40.235607 3929 log.go:172] (0xc0000e96b0) (0xc000478dc0) Stream added, broadcasting: 5\nI0506 21:23:40.236321 3929 log.go:172] (0xc0000e96b0) Reply frame received for 5\nI0506 21:23:40.312679 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.312733 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.312758 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.312792 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.312804 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.312821 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.319173 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.319206 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.319229 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.319660 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.319690 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.319713 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.319749 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 
21:23:40.319764 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.319785 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.326070 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.326094 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.326114 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.327033 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.327061 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.327103 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.327296 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.327334 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.327380 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.332012 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.332041 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.332062 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.332559 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.332612 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.332647 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.332690 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.332719 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.332749 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.338578 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.338598 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.338631 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.339484 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.339508 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.339537 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.339583 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.339612 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.339660 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.347438 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.347461 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.347486 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.348118 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.348158 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.348192 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.348207 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.348223 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.348237 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.353858 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.353878 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.353891 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.353911 3929 log.go:172] (0xc0000e96b0) Data frame received for 
5\nI0506 21:23:40.353937 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.353965 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\nI0506 21:23:40.353977 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.353987 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.354023 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\nI0506 21:23:40.354123 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.354151 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.354171 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.358775 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.358787 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.358792 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.359628 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.359649 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.359672 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.359677 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.359686 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.359691 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.363672 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.363700 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.363721 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.364144 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.364179 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.364196 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.364213 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.364229 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.364258 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.370448 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.370467 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.370484 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.370946 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.370966 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.371006 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.371022 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.371032 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.371048 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.375434 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.375452 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.375467 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.375921 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.375945 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.375962 3929 log.go:172] (0xc0000e96b0) Data frame received for 
3\nI0506 21:23:40.375986 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.375998 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.376012 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\nI0506 21:23:40.380042 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.380066 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.380091 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.380440 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.380462 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.380491 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.380505 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.380523 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.380543 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.385606 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.385628 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.385644 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.385977 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.386001 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.386030 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.386042 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.386059 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.386068 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.389997 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.390030 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.390065 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.390404 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.390440 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.390458 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.390468 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.390481 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.390489 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.394071 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.394096 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.394119 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.394407 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.394427 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.394444 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.394465 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.394478 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.394504 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.399239 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.399273 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.399294 3929 log.go:172] (0xc0004bc280) (3) Data frame 
sent\nI0506 21:23:40.399773 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.399803 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.399828 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.399873 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.399897 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.399935 3929 log.go:172] (0xc000478dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32218/\nI0506 21:23:40.403942 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.403966 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.403981 3929 log.go:172] (0xc0004bc280) (3) Data frame sent\nI0506 21:23:40.404307 3929 log.go:172] (0xc0000e96b0) Data frame received for 3\nI0506 21:23:40.404320 3929 log.go:172] (0xc0004bc280) (3) Data frame handling\nI0506 21:23:40.404636 3929 log.go:172] (0xc0000e96b0) Data frame received for 5\nI0506 21:23:40.404665 3929 log.go:172] (0xc000478dc0) (5) Data frame handling\nI0506 21:23:40.406255 3929 log.go:172] (0xc0000e96b0) Data frame received for 1\nI0506 21:23:40.406294 3929 log.go:172] (0xc000944320) (1) Data frame handling\nI0506 21:23:40.406309 3929 log.go:172] (0xc000944320) (1) Data frame sent\nI0506 21:23:40.406340 3929 log.go:172] (0xc0000e96b0) (0xc000944320) Stream removed, broadcasting: 1\nI0506 21:23:40.406366 3929 log.go:172] (0xc0000e96b0) Go away received\nI0506 21:23:40.406723 3929 log.go:172] (0xc0000e96b0) (0xc000944320) Stream removed, broadcasting: 1\nI0506 21:23:40.406742 3929 log.go:172] (0xc0000e96b0) (0xc0004bc280) Stream removed, broadcasting: 3\nI0506 21:23:40.406752 3929 log.go:172] (0xc0000e96b0) (0xc000478dc0) Stream removed, broadcasting: 5\n" May 6 21:23:40.412: INFO: stdout: "\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk\naffinity-nodeport-transition-vk6mk" May 6 21:23:40.412: INFO: Received response from host: May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: 
affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Received response from host: affinity-nodeport-transition-vk6mk May 6 21:23:40.412: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-7774, will wait for the garbage collector to delete the pods May 6 21:23:40.529: INFO: Deleting ReplicationController affinity-nodeport-transition took: 19.037256ms May 6 21:23:40.830: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 300.282988ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:23:54.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7774" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:30.518 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":278,"skipped":4633,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:23:54.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-fd5570e9-ee02-4f65-a89a-a8c0d8dacdd6 STEP: Creating a pod to test consume configMaps May 6 21:23:55.096: INFO: Waiting up to 5m0s for pod "pod-configmaps-69d10ac1-ab76-472b-b3a4-146ec221eb68" in namespace "configmap-3789" to be "Succeeded or Failed" May 6 21:23:55.147: INFO: Pod "pod-configmaps-69d10ac1-ab76-472b-b3a4-146ec221eb68": Phase="Pending", Reason="", readiness=false. Elapsed: 51.22506ms May 6 21:23:57.152: INFO: Pod "pod-configmaps-69d10ac1-ab76-472b-b3a4-146ec221eb68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055796818s May 6 21:23:59.156: INFO: Pod "pod-configmaps-69d10ac1-ab76-472b-b3a4-146ec221eb68": Phase="Running", Reason="", readiness=true. Elapsed: 4.06007199s May 6 21:24:01.162: INFO: Pod "pod-configmaps-69d10ac1-ab76-472b-b3a4-146ec221eb68": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.06560629s STEP: Saw pod success May 6 21:24:01.162: INFO: Pod "pod-configmaps-69d10ac1-ab76-472b-b3a4-146ec221eb68" satisfied condition "Succeeded or Failed" May 6 21:24:01.164: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-69d10ac1-ab76-472b-b3a4-146ec221eb68 container configmap-volume-test: STEP: delete the pod May 6 21:24:01.211: INFO: Waiting for pod pod-configmaps-69d10ac1-ab76-472b-b3a4-146ec221eb68 to disappear May 6 21:24:01.221: INFO: Pod pod-configmaps-69d10ac1-ab76-472b-b3a4-146ec221eb68 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:24:01.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3789" for this suite. • [SLOW TEST:6.229 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":279,"skipped":4662,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:24:01.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 6 21:24:01.352: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 6 21:24:01.413: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 6 21:24:01.413: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 6 21:24:01.497: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 6 21:24:01.497: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 6 21:24:01.573: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 6 21:24:01.573: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 6 21:24:09.158: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:24:09.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-79" for this suite. 
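The request/limit pairs verified above come straight from the LimitRange defaults: defaultRequest supplies requests for containers that omit them (100m CPU, 200Mi memory, 200Gi ephemeral-storage in this run) and default supplies limits (500m, 500Mi, 500Gi). A sketch of such a LimitRange (object name is illustrative, not from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-demo           # hypothetical name
spec:
  limits:
  - type: Container
    defaultRequest:               # filled into spec.containers[].resources.requests when absent
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:                      # filled into spec.containers[].resources.limits when absent
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
EOF

A pod created in the same namespace with partial requirements gets the two merged, its own values kept and the missing ones filled from the defaults, which is what the "partial resource requirements" step checks.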
• [SLOW TEST:8.041 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":280,"skipped":4717,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:24:09.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 6 21:24:09.427: INFO: Waiting up to 5m0s for pod "pod-03c8bf8f-f38a-4fbb-8653-8978126c78fb" in namespace "emptydir-9218" to be "Succeeded or Failed" May 6 21:24:09.431: INFO: Pod "pod-03c8bf8f-f38a-4fbb-8653-8978126c78fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100129ms May 6 21:24:11.503: INFO: Pod "pod-03c8bf8f-f38a-4fbb-8653-8978126c78fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075958233s May 6 21:24:13.506: INFO: Pod "pod-03c8bf8f-f38a-4fbb-8653-8978126c78fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079474768s May 6 21:24:15.509: INFO: Pod "pod-03c8bf8f-f38a-4fbb-8653-8978126c78fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082178377s STEP: Saw pod success May 6 21:24:15.509: INFO: Pod "pod-03c8bf8f-f38a-4fbb-8653-8978126c78fb" satisfied condition "Succeeded or Failed" May 6 21:24:15.510: INFO: Trying to get logs from node latest-worker2 pod pod-03c8bf8f-f38a-4fbb-8653-8978126c78fb container test-container: STEP: delete the pod May 6 21:24:15.788: INFO: Waiting for pod pod-03c8bf8f-f38a-4fbb-8653-8978126c78fb to disappear May 6 21:24:15.839: INFO: Pod pod-03c8bf8f-f38a-4fbb-8653-8978126c78fb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:24:15.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9218" for this suite. 
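The (non-root,0666,default) variant above differs from the 0644 case sketched earlier only in the file mode the test container creates; the same manifest applies with chmod 0666.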
• [SLOW TEST:6.660 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":281,"skipped":4722,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:24:15.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 6 21:24:16.264: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix230195271/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:24:16.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2039" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":282,"skipped":4725,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:24:16.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:24:22.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3019" for this suite.
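Two behaviors worth unpacking here. The proxy test simply starts kubectl proxy on a unix socket and fetches /api/ through it; the ReplicationController test creates a bare pod first, then an RC whose selector matches it, and asserts the controller adopts the orphan instead of spawning a replacement. A sketch of both; the socket path and image are illustrative, while the pod-adoption name and its 'name' label come from the STEP lines above.

# Hedged sketch 1: the unix-socket proxy probe (socket path is illustrative).
kubectl proxy --unix-socket=/tmp/k8s-proxy.sock &
sleep 1                          # give the proxy a moment to bind
curl --unix-socket /tmp/k8s-proxy.sock http://localhost/api/

# Hedged sketch 2: orphan-pod adoption.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2  # illustrative image
EOF
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption           # matches the pre-existing pod's label
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: main
        image: k8s.gcr.io/pause:3.2
EOF
# Adoption shows up as an ownerReference on the existing pod:
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}'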
• [SLOW TEST:5.731 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":283,"skipped":4732,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:24:22.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:24:55.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4262" for this suite. 
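The three container names above plausibly encode the restart policy under test in their suffixes: rpa (Always), rpof (OnFailure), rpn (Never). For each, the test checks RestartCount, Phase, the Ready condition, and State after the container exits. A minimal sketch of the Never case, with illustrative names and image:

# Hedged sketch: a container that exits cleanly under restartPolicy: Never.
# With Never and exit code 0 the pod lands in phase Succeeded with
# restartCount 0; OnFailure/Always would instead restart on failure.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 0"]
EOF
kubectl get pod terminate-demo \
  -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'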
• [SLOW TEST:33.700 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":284,"skipped":4744,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:24:55.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:25:16.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3468" for this suite. 
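"Tasks sometimes fail and are locally restarted" points at restartPolicy: OnFailure: the kubelet restarts the failed container inside the same pod rather than the Job controller creating a new pod. A minimal sketch under that reading, assuming busybox and an emptyDir marker file so the first attempt fails and the in-place retry succeeds (the real test uses its own fixture):

# Hedged sketch: each pod fails once, then succeeds after a local restart.
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: sometimes-fail           # illustrative name
spec:
  completions: 2                 # illustrative counts
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure   # restart the container, keep the pod
      volumes:
      - name: scratch
        emptyDir: {}             # survives container restarts within a pod
      containers:
      - name: worker
        image: busybox
        volumeMounts:
        - name: scratch
          mountPath: /scratch
        command: ["sh", "-c",
          "test -f /scratch/ran || { touch /scratch/ran; exit 1; }; exit 0"]
EOF
kubectl wait --for=condition=complete job/sometimes-fail --timeout=120s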
• [SLOW TEST:20.272 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":285,"skipped":4754,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:25:16.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 6 21:25:22.341: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3779 PodName:var-expansion-04d30e79-5809-4eba-9263-dfc84fb0e5d9 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 21:25:22.341: INFO: >>> kubeConfig: /root/.kube/config I0506 21:25:22.445602 7 log.go:172] (0xc00242b130) (0xc001bac1e0) Create stream I0506 21:25:22.445715 7 log.go:172] (0xc00242b130) (0xc001bac1e0) Stream added, broadcasting: 1 I0506 21:25:22.447896 7 log.go:172] (0xc00242b130) Reply frame received for 1 I0506 21:25:22.447927 7 log.go:172] (0xc00242b130) (0xc00138afa0) Create stream I0506 21:25:22.447937 7 log.go:172] (0xc00242b130) (0xc00138afa0) Stream added, broadcasting: 3 I0506 21:25:22.448711 7 log.go:172] (0xc00242b130) Reply frame received for 3 I0506 21:25:22.448754 7 log.go:172] (0xc00242b130) (0xc001eb05a0) Create stream I0506 21:25:22.448769 7 log.go:172] (0xc00242b130) (0xc001eb05a0) Stream added, broadcasting: 5 I0506 21:25:22.449673 7 log.go:172] (0xc00242b130) Reply frame received for 5 I0506 21:25:22.515073 7 log.go:172] (0xc00242b130) Data frame received for 5 I0506 21:25:22.515107 7 log.go:172] (0xc001eb05a0) (5) Data frame handling I0506 21:25:22.515131 7 log.go:172] (0xc00242b130) Data frame received for 3 I0506 21:25:22.515145 7 log.go:172] (0xc00138afa0) (3) Data frame handling I0506 21:25:22.516362 7 log.go:172] (0xc00242b130) Data frame received for 1 I0506 21:25:22.516387 7 log.go:172] (0xc001bac1e0) (1) Data frame handling I0506 21:25:22.516418 7 log.go:172] (0xc001bac1e0) (1) Data frame sent I0506 21:25:22.516437 7 log.go:172] (0xc00242b130) (0xc001bac1e0) Stream removed, broadcasting: 1 I0506 21:25:22.516461 7 log.go:172] (0xc00242b130) Go away received I0506 21:25:22.516600 7 log.go:172] (0xc00242b130) (0xc001bac1e0) Stream removed, broadcasting: 1 I0506 21:25:22.516622 7 log.go:172] (0xc00242b130) (0xc00138afa0) Stream removed, broadcasting: 3 I0506 21:25:22.516641 7 log.go:172] (0xc00242b130) (0xc001eb05a0) Stream removed, broadcasting: 5 
STEP: test for file in mounted path May 6 21:25:22.520: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3779 PodName:var-expansion-04d30e79-5809-4eba-9263-dfc84fb0e5d9 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 21:25:22.520: INFO: >>> kubeConfig: /root/.kube/config I0506 21:25:22.554886 7 log.go:172] (0xc002f729a0) (0xc00138ba40) Create stream I0506 21:25:22.554912 7 log.go:172] (0xc002f729a0) (0xc00138ba40) Stream added, broadcasting: 1 I0506 21:25:22.557386 7 log.go:172] (0xc002f729a0) Reply frame received for 1 I0506 21:25:22.557411 7 log.go:172] (0xc002f729a0) (0xc00138bc20) Create stream I0506 21:25:22.557419 7 log.go:172] (0xc002f729a0) (0xc00138bc20) Stream added, broadcasting: 3 I0506 21:25:22.558388 7 log.go:172] (0xc002f729a0) Reply frame received for 3 I0506 21:25:22.558423 7 log.go:172] (0xc002f729a0) (0xc001bac280) Create stream I0506 21:25:22.558436 7 log.go:172] (0xc002f729a0) (0xc001bac280) Stream added, broadcasting: 5 I0506 21:25:22.559336 7 log.go:172] (0xc002f729a0) Reply frame received for 5 I0506 21:25:22.629570 7 log.go:172] (0xc002f729a0) Data frame received for 5 I0506 21:25:22.629620 7 log.go:172] (0xc001bac280) (5) Data frame handling I0506 21:25:22.629811 7 log.go:172] (0xc002f729a0) Data frame received for 3 I0506 21:25:22.629851 7 log.go:172] (0xc00138bc20) (3) Data frame handling I0506 21:25:22.631455 7 log.go:172] (0xc002f729a0) Data frame received for 1 I0506 21:25:22.631482 7 log.go:172] (0xc00138ba40) (1) Data frame handling I0506 21:25:22.631512 7 log.go:172] (0xc00138ba40) (1) Data frame sent I0506 21:25:22.631534 7 log.go:172] (0xc002f729a0) (0xc00138ba40) Stream removed, broadcasting: 1 I0506 21:25:22.631571 7 log.go:172] (0xc002f729a0) Go away received I0506 21:25:22.631671 7 log.go:172] (0xc002f729a0) (0xc00138ba40) Stream removed, broadcasting: 1 I0506 21:25:22.631694 7 log.go:172] (0xc002f729a0) (0xc00138bc20) Stream removed, broadcasting: 3 I0506 21:25:22.631703 7 log.go:172] (0xc002f729a0) (0xc001bac280) Stream removed, broadcasting: 5 STEP: updating the annotation value May 6 21:25:23.143: INFO: Successfully updated pod "var-expansion-04d30e79-5809-4eba-9263-dfc84fb0e5d9" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 6 21:25:23.169: INFO: Deleting pod "var-expansion-04d30e79-5809-4eba-9263-dfc84fb0e5d9" in namespace "var-expansion-3779" May 6 21:25:23.174: INFO: Wait up to 5m0s for pod "var-expansion-04d30e79-5809-4eba-9263-dfc84fb0e5d9" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:25:57.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3779" for this suite. 
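The two exec probes above make sense once you see the pod shape: one volume mounted twice, fully at /volume_mount and at an expanded subPathExpr under /subpath_mount, so a file created at /volume_mount/mypath/foo/test.log surfaces as /subpath_mount/test.log. A sketch under that assumption; the annotation key, image, and pod name are illustrative, and the mount paths and subdirectory match the probes. The later "updating the annotation value" step then checks the pod keeps running through a metadata update before the graceful delete.

# Hedged sketch: subPathExpr resolved from an annotation via the downward API.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
  annotations:
    mysubpath: mypath/foo        # illustrative key; value matches the probes
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    env:
    - name: SUBPATH
      valueFrom:
        fieldRef:
          fieldPath: metadata.annotations['mysubpath']
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount   # the whole volume
    - name: workdir
      mountPath: /subpath_mount  # only the expanded subdirectory
      subPathExpr: $(SUBPATH)
  volumes:
  - name: workdir
    emptyDir: {}
EOF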
• [SLOW TEST:41.079 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":286,"skipped":4774,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:25:57.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 6 21:26:01.834: INFO: Successfully updated pod "annotationupdate2554a4a2-535b-452b-b3af-bdf97eee487d" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:26:04.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1213" for this suite. 
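This one exercises a downwardAPI volume that projects the pod's annotations into a file; when the annotations are mutated, the kubelet rewrites the file in place with no pod restart, and the test watches for the new content. A minimal sketch (pod name, annotation, and image are illustrative):

# Hedged sketch: annotations projected into a file that updates live.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# Mutate the annotation and watch the mounted file follow on the kubelet's
# next sync, hence the few-second wait visible in the log:
kubectl annotate pod annotationupdate-demo builder=bob --overwrite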
• [SLOW TEST:6.888 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":287,"skipped":4789,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 6 21:26:04.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 6 21:26:04.174: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85540d38-efa1-4273-ac92-55541d4732da" in namespace "projected-3494" to be "Succeeded or Failed" May 6 21:26:04.177: INFO: Pod "downwardapi-volume-85540d38-efa1-4273-ac92-55541d4732da": Phase="Pending", Reason="", readiness=false. Elapsed: 3.281964ms May 6 21:26:06.217: INFO: Pod "downwardapi-volume-85540d38-efa1-4273-ac92-55541d4732da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043291154s May 6 21:26:08.226: INFO: Pod "downwardapi-volume-85540d38-efa1-4273-ac92-55541d4732da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051842128s STEP: Saw pod success May 6 21:26:08.226: INFO: Pod "downwardapi-volume-85540d38-efa1-4273-ac92-55541d4732da" satisfied condition "Succeeded or Failed" May 6 21:26:08.228: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-85540d38-efa1-4273-ac92-55541d4732da container client-container: STEP: delete the pod May 6 21:26:08.250: INFO: Waiting for pod downwardapi-volume-85540d38-efa1-4273-ac92-55541d4732da to disappear May 6 21:26:08.432: INFO: Pod downwardapi-volume-85540d38-efa1-4273-ac92-55541d4732da no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 6 21:26:08.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3494" for this suite. 
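Here the downward API rides inside a projected volume, exposing the container's own CPU request via resourceFieldRef. A sketch with an illustrative request of 250m and a 1m divisor so the mounted file reads in millicores:

# Hedged sketch: projected downwardAPI exposing requests.cpu.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox               # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m        # report in millicores: the file reads 250
EOF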
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":288,"skipped":4792,"failed":0} SSSSSSSSSSSSSSMay 6 21:26:08.442: INFO: Running AfterSuite actions on all nodes May 6 21:26:08.442: INFO: Running AfterSuite actions on node 1 May 6 21:26:08.442: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":288,"completed":288,"skipped":4806,"failed":0} Ran 288 of 5094 Specs in 6274.627 seconds SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4806 Skipped PASS