I0520 10:46:53.825003 7 e2e.go:224] Starting e2e run "37a9bd49-9a87-11ea-b520-0242ac110018" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589971613 - Will randomize all specs
Will run 201 of 2164 specs
May 20 10:46:54.023: INFO: >>> kubeConfig: /root/.kube/config
May 20 10:46:54.027: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 20 10:46:54.041: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 20 10:46:54.073: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 20 10:46:54.073: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 20 10:46:54.073: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 20 10:46:54.079: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 20 10:46:54.079: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 20 10:46:54.079: INFO: e2e test version: v1.13.12
May 20 10:46:54.080: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 20 10:46:54.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
May 20 10:46:54.178: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
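For orientation, the pod created in the [It] step below pairs a long-running container with a postStart httpGet hook aimed at the handler container started in the step above. A minimal Go sketch against k8s.io/api follows; the image, hook path, and port are illustrative assumptions, not values recorded in this run.

// Package e2eexample holds illustrative sketches only; nothing here is taken
// from the test binary that produced this log.
package e2eexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// postStartHTTPHookPod builds a pod of the kind this test creates: a
// long-running container whose postStart hook performs an HTTP GET against
// the handler pod started beforehand.
func postStartHTTPHookPod(handlerPodIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "registry.k8s.io/pause:3.9", // placeholder long-running image
				Lifecycle: &corev1.Lifecycle{
					// In k8s.io/api releases before v0.23 (such as the v1.13
					// libraries this run was built against) this type is named
					// corev1.Handler rather than corev1.LifecycleHandler.
					PostStart: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: handlerPodIP,
							Path: "/echo?msg=poststart", // illustrative hook target
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}

The log then continues with the [It] step that creates this pod, polls the handler for the hook request, and deletes the pod.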
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 20 10:47:02.235: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 10:47:02.264: INFO: Pod pod-with-poststart-http-hook still exists May 20 10:47:04.265: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 10:47:04.391: INFO: Pod pod-with-poststart-http-hook still exists May 20 10:47:06.265: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 10:47:06.269: INFO: Pod pod-with-poststart-http-hook still exists May 20 10:47:08.265: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 10:47:08.270: INFO: Pod pod-with-poststart-http-hook still exists May 20 10:47:10.265: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 10:47:10.277: INFO: Pod pod-with-poststart-http-hook still exists May 20 10:47:12.265: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 10:47:12.280: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:47:12.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-fvrn7" for this suite. May 20 10:47:34.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:47:34.384: INFO: namespace: e2e-tests-container-lifecycle-hook-fvrn7, resource: bindings, ignored listing per whitelist May 20 10:47:34.426: INFO: namespace e2e-tests-container-lifecycle-hook-fvrn7 deletion completed in 22.142417522s • [SLOW TEST:40.346 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:47:34.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 20 10:47:34.558: INFO: Waiting up to 5m0s for pod "pod-5037ab79-9a87-11ea-b520-0242ac110018" in namespace "e2e-tests-emptydir-mcz2v" to be "success or failure" May 20 10:47:34.561: INFO: Pod "pod-5037ab79-9a87-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.817669ms May 20 10:47:36.565: INFO: Pod "pod-5037ab79-9a87-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007417787s May 20 10:47:38.570: INFO: Pod "pod-5037ab79-9a87-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011963584s STEP: Saw pod success May 20 10:47:38.570: INFO: Pod "pod-5037ab79-9a87-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 10:47:38.573: INFO: Trying to get logs from node hunter-worker2 pod pod-5037ab79-9a87-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 10:47:38.618: INFO: Waiting for pod pod-5037ab79-9a87-11ea-b520-0242ac110018 to disappear May 20 10:47:38.627: INFO: Pod pod-5037ab79-9a87-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:47:38.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mcz2v" for this suite. May 20 10:47:44.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:47:44.763: INFO: namespace: e2e-tests-emptydir-mcz2v, resource: bindings, ignored listing per whitelist May 20 10:47:44.859: INFO: namespace e2e-tests-emptydir-mcz2v deletion completed in 6.228505954s • [SLOW TEST:10.433 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:47:44.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 20 10:47:44.953: INFO: Waiting up to 5m0s for pod "pod-566c46b4-9a87-11ea-b520-0242ac110018" in namespace "e2e-tests-emptydir-8mcsk" to be "success or failure" May 20 10:47:44.984: INFO: Pod "pod-566c46b4-9a87-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 30.880989ms May 20 10:47:47.061: INFO: Pod "pod-566c46b4-9a87-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108468102s May 20 10:47:49.066: INFO: Pod "pod-566c46b4-9a87-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.113234153s May 20 10:47:51.070: INFO: Pod "pod-566c46b4-9a87-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.117462214s STEP: Saw pod success May 20 10:47:51.070: INFO: Pod "pod-566c46b4-9a87-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 10:47:51.073: INFO: Trying to get logs from node hunter-worker2 pod pod-566c46b4-9a87-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 10:47:51.092: INFO: Waiting for pod pod-566c46b4-9a87-11ea-b520-0242ac110018 to disappear May 20 10:47:51.096: INFO: Pod pod-566c46b4-9a87-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:47:51.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8mcsk" for this suite. May 20 10:47:57.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:47:57.176: INFO: namespace: e2e-tests-emptydir-8mcsk, resource: bindings, ignored listing per whitelist May 20 10:47:57.223: INFO: namespace e2e-tests-emptydir-8mcsk deletion completed in 6.123804174s • [SLOW TEST:12.363 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:47:57.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command May 20 10:47:57.315: INFO: Waiting up to 5m0s for pod "client-containers-5dca290f-9a87-11ea-b520-0242ac110018" in namespace "e2e-tests-containers-nbkjq" to be "success or failure" May 20 10:47:57.318: INFO: Pod "client-containers-5dca290f-9a87-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.217296ms May 20 10:47:59.343: INFO: Pod "client-containers-5dca290f-9a87-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027995917s May 20 10:48:01.347: INFO: Pod "client-containers-5dca290f-9a87-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031811714s STEP: Saw pod success May 20 10:48:01.347: INFO: Pod "client-containers-5dca290f-9a87-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 10:48:01.350: INFO: Trying to get logs from node hunter-worker2 pod client-containers-5dca290f-9a87-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 10:48:01.367: INFO: Waiting for pod client-containers-5dca290f-9a87-11ea-b520-0242ac110018 to disappear May 20 10:48:01.396: INFO: Pod client-containers-5dca290f-9a87-11ea-b520-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:48:01.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-nbkjq" for this suite. May 20 10:48:07.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:48:07.780: INFO: namespace: e2e-tests-containers-nbkjq, resource: bindings, ignored listing per whitelist May 20 10:48:07.787: INFO: namespace e2e-tests-containers-nbkjq deletion completed in 6.387360434s • [SLOW TEST:10.564 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:48:07.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 10:48:07.945: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 20 10:48:12.949: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 20 10:48:12.949: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 20 10:48:14.971: INFO: Creating deployment "test-rollover-deployment" May 20 10:48:14.981: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 20 10:48:16.986: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 20 10:48:16.991: INFO: Ensure that both replica sets have 1 created replica May 20 10:48:16.995: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 20 10:48:17.000: INFO: Updating deployment test-rollover-deployment May 20 10:48:17.000: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 20 10:48:19.068: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 20 10:48:19.074: INFO: Make sure deployment 
"test-rollover-deployment" is complete May 20 10:48:19.079: INFO: all replica sets need to contain the pod-template-hash label May 20 10:48:19.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568495, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568495, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568497, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568494, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 10:48:21.088: INFO: all replica sets need to contain the pod-template-hash label May 20 10:48:21.088: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568495, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568495, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568500, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568494, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 10:48:23.086: INFO: all replica sets need to contain the pod-template-hash label May 20 10:48:23.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568495, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568495, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568500, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568494, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 10:48:25.087: INFO: all replica sets need to contain the pod-template-hash label May 20 10:48:25.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568495, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568495, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568500, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568494, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 10:48:27.090: INFO: all replica sets need to contain the pod-template-hash label May 20 10:48:27.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568495, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568495, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568500, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568494, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 10:48:29.087: INFO: all replica sets need to contain the pod-template-hash label May 20 10:48:29.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568495, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568495, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568500, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725568494, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 10:48:31.239: INFO: May 20 10:48:31.239: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 20 10:48:31.245: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-kb2qt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kb2qt/deployments/test-rollover-deployment,UID:6852a024-9a87-11ea-99e8-0242ac110002,ResourceVersion:11558495,Generation:2,CreationTimestamp:2020-05-20 10:48:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-20 10:48:15 +0000 UTC 2020-05-20 10:48:15 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-20 10:48:30 +0000 UTC 2020-05-20 10:48:14 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 20 10:48:31.248: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-kb2qt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kb2qt/replicasets/test-rollover-deployment-5b8479fdb6,UID:69882ad4-9a87-11ea-99e8-0242ac110002,ResourceVersion:11558486,Generation:2,CreationTimestamp:2020-05-20 10:48:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6852a024-9a87-11ea-99e8-0242ac110002 0xc0014d0a37 
0xc0014d0a38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 20 10:48:31.248: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 20 10:48:31.248: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-kb2qt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kb2qt/replicasets/test-rollover-controller,UID:64217839-9a87-11ea-99e8-0242ac110002,ResourceVersion:11558494,Generation:2,CreationTimestamp:2020-05-20 10:48:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6852a024-9a87-11ea-99e8-0242ac110002 0xc0014d0897 0xc0014d0898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil 
/dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 20 10:48:31.248: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-kb2qt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kb2qt/replicasets/test-rollover-deployment-58494b7559,UID:6854e0f8-9a87-11ea-99e8-0242ac110002,ResourceVersion:11558451,Generation:2,CreationTimestamp:2020-05-20 10:48:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6852a024-9a87-11ea-99e8-0242ac110002 0xc0014d0957 0xc0014d0958}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 20 10:48:31.251: INFO: Pod "test-rollover-deployment-5b8479fdb6-5kb8m" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-5kb8m,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-kb2qt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kb2qt/pods/test-rollover-deployment-5b8479fdb6-5kb8m,UID:699d5b82-9a87-11ea-99e8-0242ac110002,ResourceVersion:11558464,Generation:0,CreationTimestamp:2020-05-20 10:48:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 69882ad4-9a87-11ea-99e8-0242ac110002 0xc001a0f677 0xc001a0f678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p2s72 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p2s72,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-p2s72 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a0f760} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a0f780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 10:48:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 10:48:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 10:48:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-05-20 10:48:17 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.39,StartTime:2020-05-20 10:48:17 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-20 10:48:20 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://a1d39fa8bb30f6c165985515f3296e307877a5e902c67e037691bee1deba294d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:48:31.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-kb2qt" for this suite. May 20 10:48:39.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:48:39.380: INFO: namespace: e2e-tests-deployment-kb2qt, resource: bindings, ignored listing per whitelist May 20 10:48:39.384: INFO: namespace e2e-tests-deployment-kb2qt deletion completed in 8.1298412s • [SLOW TEST:31.597 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:48:39.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode May 20 10:48:39.604: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-jf7qs" to be "success or failure" May 20 10:48:39.613: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.24001ms May 20 10:48:41.703: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099484007s May 20 10:48:43.726: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.121894373s May 20 10:48:45.730: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.12604441s STEP: Saw pod success May 20 10:48:45.730: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 20 10:48:45.733: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 20 10:48:45.752: INFO: Waiting for pod pod-host-path-test to disappear May 20 10:48:45.757: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:48:45.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-jf7qs" for this suite. May 20 10:48:51.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:48:51.925: INFO: namespace: e2e-tests-hostpath-jf7qs, resource: bindings, ignored listing per whitelist May 20 10:48:51.948: INFO: namespace e2e-tests-hostpath-jf7qs deletion completed in 6.187972467s • [SLOW TEST:12.564 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:48:51.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:48:56.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-bt6q5" for this suite. 
May 20 10:49:02.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:49:02.173: INFO: namespace: e2e-tests-kubelet-test-bt6q5, resource: bindings, ignored listing per whitelist May 20 10:49:02.225: INFO: namespace e2e-tests-kubelet-test-bt6q5 deletion completed in 6.110182979s • [SLOW TEST:10.277 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:49:02.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 10:49:06.394: INFO: Waiting up to 5m0s for pod "client-envvars-86f6db80-9a87-11ea-b520-0242ac110018" in namespace "e2e-tests-pods-qgq5m" to be "success or failure" May 20 10:49:06.405: INFO: Pod "client-envvars-86f6db80-9a87-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.204509ms May 20 10:49:08.409: INFO: Pod "client-envvars-86f6db80-9a87-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01541205s May 20 10:49:10.414: INFO: Pod "client-envvars-86f6db80-9a87-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020215288s STEP: Saw pod success May 20 10:49:10.414: INFO: Pod "client-envvars-86f6db80-9a87-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 10:49:10.418: INFO: Trying to get logs from node hunter-worker pod client-envvars-86f6db80-9a87-11ea-b520-0242ac110018 container env3cont: STEP: delete the pod May 20 10:49:10.436: INFO: Waiting for pod client-envvars-86f6db80-9a87-11ea-b520-0242ac110018 to disappear May 20 10:49:10.476: INFO: Pod client-envvars-86f6db80-9a87-11ea-b520-0242ac110018 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:49:10.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-qgq5m" for this suite. 
May 20 10:49:50.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:49:50.559: INFO: namespace: e2e-tests-pods-qgq5m, resource: bindings, ignored listing per whitelist May 20 10:49:50.559: INFO: namespace e2e-tests-pods-qgq5m deletion completed in 40.078866195s • [SLOW TEST:48.334 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:49:50.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 20 10:49:50.659: INFO: Waiting up to 5m0s for pod "downward-api-a157666a-9a87-11ea-b520-0242ac110018" in namespace "e2e-tests-downward-api-kfvx7" to be "success or failure" May 20 10:49:50.674: INFO: Pod "downward-api-a157666a-9a87-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.399432ms May 20 10:49:52.835: INFO: Pod "downward-api-a157666a-9a87-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176461635s May 20 10:49:54.839: INFO: Pod "downward-api-a157666a-9a87-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.17961625s STEP: Saw pod success May 20 10:49:54.839: INFO: Pod "downward-api-a157666a-9a87-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 10:49:54.841: INFO: Trying to get logs from node hunter-worker2 pod downward-api-a157666a-9a87-11ea-b520-0242ac110018 container dapi-container: STEP: delete the pod May 20 10:49:55.011: INFO: Waiting for pod downward-api-a157666a-9a87-11ea-b520-0242ac110018 to disappear May 20 10:49:55.028: INFO: Pod downward-api-a157666a-9a87-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:49:55.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-kfvx7" for this suite. 
May 20 10:50:01.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:50:01.058: INFO: namespace: e2e-tests-downward-api-kfvx7, resource: bindings, ignored listing per whitelist May 20 10:50:01.129: INFO: namespace e2e-tests-downward-api-kfvx7 deletion completed in 6.098603235s • [SLOW TEST:10.570 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:50:01.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:50:07.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-4vcqz" for this suite. 
May 20 10:50:53.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:50:53.386: INFO: namespace: e2e-tests-kubelet-test-4vcqz, resource: bindings, ignored listing per whitelist May 20 10:50:53.415: INFO: namespace e2e-tests-kubelet-test-4vcqz deletion completed in 46.146113569s • [SLOW TEST:52.286 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:50:53.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:50:54.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-m85rc" for this suite. 
May 20 10:51:00.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:51:00.907: INFO: namespace: e2e-tests-kubelet-test-m85rc, resource: bindings, ignored listing per whitelist May 20 10:51:01.085: INFO: namespace e2e-tests-kubelet-test-m85rc deletion completed in 6.517110162s • [SLOW TEST:7.669 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:51:01.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 20 10:51:01.311: INFO: Pod name pod-release: Found 0 pods out of 1 May 20 10:51:06.314: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:51:07.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-vlmk2" for this suite. 
May 20 10:51:22.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:51:22.296: INFO: namespace: e2e-tests-replication-controller-vlmk2, resource: bindings, ignored listing per whitelist May 20 10:51:22.356: INFO: namespace e2e-tests-replication-controller-vlmk2 deletion completed in 14.570232884s • [SLOW TEST:21.271 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:51:22.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-d8130384-9a87-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 10:51:22.480: INFO: Waiting up to 5m0s for pod "pod-secrets-d814e0b8-9a87-11ea-b520-0242ac110018" in namespace "e2e-tests-secrets-2nczq" to be "success or failure" May 20 10:51:22.484: INFO: Pod "pod-secrets-d814e0b8-9a87-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.818167ms May 20 10:51:24.488: INFO: Pod "pod-secrets-d814e0b8-9a87-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007819818s May 20 10:51:26.492: INFO: Pod "pod-secrets-d814e0b8-9a87-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.011106208s May 20 10:51:28.496: INFO: Pod "pod-secrets-d814e0b8-9a87-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015255469s STEP: Saw pod success May 20 10:51:28.496: INFO: Pod "pod-secrets-d814e0b8-9a87-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 10:51:28.498: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-d814e0b8-9a87-11ea-b520-0242ac110018 container secret-volume-test: STEP: delete the pod May 20 10:51:28.516: INFO: Waiting for pod pod-secrets-d814e0b8-9a87-11ea-b520-0242ac110018 to disappear May 20 10:51:28.520: INFO: Pod pod-secrets-d814e0b8-9a87-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:51:28.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-2nczq" for this suite. 
May 20 10:51:34.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:51:34.602: INFO: namespace: e2e-tests-secrets-2nczq, resource: bindings, ignored listing per whitelist May 20 10:51:34.611: INFO: namespace e2e-tests-secrets-2nczq deletion completed in 6.088206822s • [SLOW TEST:12.255 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:51:34.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions May 20 10:51:34.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 20 10:51:35.128: INFO: stderr: "" May 20 10:51:35.128: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:51:35.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jlh45" for this suite. 
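The api-versions spec above reduces to the single kubectl invocation recorded in the log plus an assertion that the core group/version is present:

kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1   # non-zero exit if v1 is missing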
May 20 10:51:41.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:51:41.406: INFO: namespace: e2e-tests-kubectl-jlh45, resource: bindings, ignored listing per whitelist May 20 10:51:41.413: INFO: namespace e2e-tests-kubectl-jlh45 deletion completed in 6.280703817s • [SLOW TEST:6.802 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:51:41.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-e37e2819-9a87-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 10:51:41.638: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e3800345-9a87-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-s9gn9" to be "success or failure" May 20 10:51:41.653: INFO: Pod "pod-projected-configmaps-e3800345-9a87-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.556914ms May 20 10:51:43.657: INFO: Pod "pod-projected-configmaps-e3800345-9a87-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018793881s May 20 10:51:45.661: INFO: Pod "pod-projected-configmaps-e3800345-9a87-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.023250423s May 20 10:51:47.665: INFO: Pod "pod-projected-configmaps-e3800345-9a87-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02749265s STEP: Saw pod success May 20 10:51:47.665: INFO: Pod "pod-projected-configmaps-e3800345-9a87-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 10:51:47.668: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e3800345-9a87-11ea-b520-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 20 10:51:47.701: INFO: Waiting for pod pod-projected-configmaps-e3800345-9a87-11ea-b520-0242ac110018 to disappear May 20 10:51:47.707: INFO: Pod pod-projected-configmaps-e3800345-9a87-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:51:47.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-s9gn9" for this suite. 
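The projected-ConfigMap spec above projects a ConfigMap key into a volume under a remapped path and runs the container as a non-root user. An illustrative reproduction; the names, the UID, and the busybox image are assumptions, not taken from the run:

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo    # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                       # the "as non-root" part
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: path/to/data-1          # the mapping: the key surfaces at a chosen relative path
EOF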
May 20 10:51:53.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:51:53.746: INFO: namespace: e2e-tests-projected-s9gn9, resource: bindings, ignored listing per whitelist May 20 10:51:53.804: INFO: namespace e2e-tests-projected-s9gn9 deletion completed in 6.092842017s • [SLOW TEST:12.390 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:51:53.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-tg8cq STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-tg8cq to expose endpoints map[] May 20 10:51:54.012: INFO: Get endpoints failed (11.26805ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 20 10:51:55.017: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-tg8cq exposes endpoints map[] (1.015532391s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-tg8cq STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-tg8cq to expose endpoints map[pod1:[100]] May 20 10:51:59.122: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.098589899s elapsed, will retry) May 20 10:52:00.127: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-tg8cq exposes endpoints map[pod1:[100]] (5.104195079s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-tg8cq STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-tg8cq to expose endpoints map[pod2:[101] pod1:[100]] May 20 10:52:03.643: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-tg8cq exposes endpoints map[pod2:[101] pod1:[100]] (3.51117651s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-tg8cq STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-tg8cq to expose endpoints map[pod2:[101]] May 20 10:52:04.786: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-tg8cq exposes endpoints map[pod2:[101]] (1.138797279s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-tg8cq STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-tg8cq to expose endpoints map[] May 20 10:52:05.898: INFO: successfully validated that 
service multi-endpoint-test in namespace e2e-tests-services-tg8cq exposes endpoints map[] (1.106867141s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:52:06.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-tg8cq" for this suite. May 20 10:52:28.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:52:28.331: INFO: namespace: e2e-tests-services-tg8cq, resource: bindings, ignored listing per whitelist May 20 10:52:28.404: INFO: namespace e2e-tests-services-tg8cq deletion completed in 22.102821341s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:34.600 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:52:28.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 20 10:52:28.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fjcz2' May 20 10:52:30.828: INFO: stderr: "" May 20 10:52:30.828: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 20 10:52:30.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fjcz2' May 20 10:52:30.959: INFO: stderr: "" May 20 10:52:30.959: INFO: stdout: "update-demo-nautilus-7dp8j update-demo-nautilus-hfj8q " May 20 10:52:30.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7dp8j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fjcz2' May 20 10:52:31.051: INFO: stderr: "" May 20 10:52:31.051: INFO: stdout: "" May 20 10:52:31.051: INFO: update-demo-nautilus-7dp8j is created but not running May 20 10:52:36.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fjcz2' May 20 10:52:36.155: INFO: stderr: "" May 20 10:52:36.155: INFO: stdout: "update-demo-nautilus-7dp8j update-demo-nautilus-hfj8q " May 20 10:52:36.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7dp8j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fjcz2' May 20 10:52:36.252: INFO: stderr: "" May 20 10:52:36.252: INFO: stdout: "true" May 20 10:52:36.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7dp8j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fjcz2' May 20 10:52:36.368: INFO: stderr: "" May 20 10:52:36.368: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 10:52:36.368: INFO: validating pod update-demo-nautilus-7dp8j May 20 10:52:36.371: INFO: got data: { "image": "nautilus.jpg" } May 20 10:52:36.371: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 10:52:36.371: INFO: update-demo-nautilus-7dp8j is verified up and running May 20 10:52:36.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hfj8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fjcz2' May 20 10:52:36.475: INFO: stderr: "" May 20 10:52:36.475: INFO: stdout: "true" May 20 10:52:36.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hfj8q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fjcz2' May 20 10:52:36.572: INFO: stderr: "" May 20 10:52:36.572: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 10:52:36.572: INFO: validating pod update-demo-nautilus-hfj8q May 20 10:52:36.577: INFO: got data: { "image": "nautilus.jpg" } May 20 10:52:36.577: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 10:52:36.577: INFO: update-demo-nautilus-hfj8q is verified up and running STEP: using delete to clean up resources May 20 10:52:36.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fjcz2' May 20 10:52:36.683: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 20 10:52:36.683: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 20 10:52:36.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-fjcz2' May 20 10:52:36.805: INFO: stderr: "No resources found.\n" May 20 10:52:36.805: INFO: stdout: "" May 20 10:52:36.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-fjcz2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 20 10:52:36.918: INFO: stderr: "" May 20 10:52:36.918: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:52:36.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fjcz2" for this suite. May 20 10:52:58.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:52:58.982: INFO: namespace: e2e-tests-kubectl-fjcz2, resource: bindings, ignored listing per whitelist May 20 10:52:59.019: INFO: namespace e2e-tests-kubectl-fjcz2 deletion completed in 22.096241438s • [SLOW TEST:30.615 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:52:59.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 20 10:52:59.153: INFO: Waiting up to 5m0s for pod "pod-11b435ad-9a88-11ea-b520-0242ac110018" in namespace "e2e-tests-emptydir-dhwgc" to be "success or failure" May 20 10:52:59.170: INFO: Pod "pod-11b435ad-9a88-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.868255ms May 20 10:53:01.174: INFO: Pod "pod-11b435ad-9a88-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020646001s May 20 10:53:03.178: INFO: Pod "pod-11b435ad-9a88-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.024720693s May 20 10:53:05.182: INFO: Pod "pod-11b435ad-9a88-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.029034714s STEP: Saw pod success May 20 10:53:05.182: INFO: Pod "pod-11b435ad-9a88-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 10:53:05.185: INFO: Trying to get logs from node hunter-worker pod pod-11b435ad-9a88-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 10:53:05.236: INFO: Waiting for pod pod-11b435ad-9a88-11ea-b520-0242ac110018 to disappear May 20 10:53:05.258: INFO: Pod pod-11b435ad-9a88-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 10:53:05.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dhwgc" for this suite. May 20 10:53:11.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 10:53:11.294: INFO: namespace: e2e-tests-emptydir-dhwgc, resource: bindings, ignored listing per whitelist May 20 10:53:11.337: INFO: namespace e2e-tests-emptydir-dhwgc deletion completed in 6.074998013s • [SLOW TEST:12.317 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 10:53:11.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 20 10:53:12.048: INFO: Pod name wrapped-volume-race-19566f04-9a88-11ea-b520-0242ac110018: Found 0 pods out of 5 May 20 10:53:17.057: INFO: Pod name wrapped-volume-race-19566f04-9a88-11ea-b520-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-19566f04-9a88-11ea-b520-0242ac110018 in namespace e2e-tests-emptydir-wrapper-s7x2p, will wait for the garbage collector to delete the pods May 20 10:55:19.150: INFO: Deleting ReplicationController wrapped-volume-race-19566f04-9a88-11ea-b520-0242ac110018 took: 9.094871ms May 20 10:55:19.250: INFO: Terminating ReplicationController wrapped-volume-race-19566f04-9a88-11ea-b520-0242ac110018 pods took: 100.269998ms STEP: Creating RC which spawns configmap-volume pods May 20 10:56:01.503: INFO: Pod name wrapped-volume-race-7e5d78d2-9a88-11ea-b520-0242ac110018: Found 0 pods out of 5 May 20 10:56:06.512: INFO: Pod name wrapped-volume-race-7e5d78d2-9a88-11ea-b520-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7e5d78d2-9a88-11ea-b520-0242ac110018 in 
namespace e2e-tests-emptydir-wrapper-s7x2p, will wait for the garbage collector to delete the pods May 20 10:58:20.647: INFO: Deleting ReplicationController wrapped-volume-race-7e5d78d2-9a88-11ea-b520-0242ac110018 took: 9.330012ms May 20 10:58:20.847: INFO: Terminating ReplicationController wrapped-volume-race-7e5d78d2-9a88-11ea-b520-0242ac110018 pods took: 200.23302ms STEP: Creating RC which spawns configmap-volume pods May 20 10:58:58.180: INFO: Pod name wrapped-volume-race-e7af54f4-9a88-11ea-b520-0242ac110018: Found 0 pods out of 5 May 20 10:59:03.188: INFO: Pod name wrapped-volume-race-e7af54f4-9a88-11ea-b520-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e7af54f4-9a88-11ea-b520-0242ac110018 in namespace e2e-tests-emptydir-wrapper-s7x2p, will wait for the garbage collector to delete the pods May 20 11:01:35.307: INFO: Deleting ReplicationController wrapped-volume-race-e7af54f4-9a88-11ea-b520-0242ac110018 took: 7.481283ms May 20 11:01:35.407: INFO: Terminating ReplicationController wrapped-volume-race-e7af54f4-9a88-11ea-b520-0242ac110018 pods took: 100.255955ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:02:22.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-s7x2p" for this suite. May 20 11:02:30.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:02:30.775: INFO: namespace: e2e-tests-emptydir-wrapper-s7x2p, resource: bindings, ignored listing per whitelist May 20 11:02:30.783: INFO: namespace e2e-tests-emptydir-wrapper-s7x2p deletion completed in 8.121405734s • [SLOW TEST:559.446 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:02:30.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 20 11:02:30.896: INFO: Waiting up to 5m0s for pod "pod-667d417a-9a89-11ea-b520-0242ac110018" in namespace "e2e-tests-emptydir-kwrlf" to be "success or failure" May 20 11:02:30.901: INFO: Pod "pod-667d417a-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.127934ms May 20 11:02:32.937: INFO: Pod "pod-667d417a-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.040843531s May 20 11:02:34.940: INFO: Pod "pod-667d417a-9a89-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04419791s STEP: Saw pod success May 20 11:02:34.940: INFO: Pod "pod-667d417a-9a89-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:02:34.942: INFO: Trying to get logs from node hunter-worker2 pod pod-667d417a-9a89-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 11:02:34.969: INFO: Waiting for pod pod-667d417a-9a89-11ea-b520-0242ac110018 to disappear May 20 11:02:34.978: INFO: Pod pod-667d417a-9a89-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:02:34.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kwrlf" for this suite. May 20 11:02:41.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:02:41.121: INFO: namespace: e2e-tests-emptydir-kwrlf, resource: bindings, ignored listing per whitelist May 20 11:02:41.121: INFO: namespace e2e-tests-emptydir-kwrlf deletion completed in 6.139840553s • [SLOW TEST:10.338 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:02:41.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 20 11:02:41.251: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:02:41.254: INFO: Number of nodes with available pods: 0 May 20 11:02:41.254: INFO: Node hunter-worker is running more than one daemon pod May 20 11:02:42.261: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:02:42.265: INFO: Number of nodes with available pods: 0 May 20 11:02:42.265: INFO: Node hunter-worker is running more than one daemon pod May 20 11:02:43.259: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:02:43.263: INFO: Number of nodes with available pods: 0 May 20 11:02:43.263: INFO: Node hunter-worker is running more than one daemon pod May 20 11:02:44.258: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:02:44.261: INFO: Number of nodes with available pods: 0 May 20 11:02:44.261: INFO: Node hunter-worker is running more than one daemon pod May 20 11:02:45.260: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:02:45.263: INFO: Number of nodes with available pods: 1 May 20 11:02:45.263: INFO: Node hunter-worker2 is running more than one daemon pod May 20 11:02:46.258: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:02:46.261: INFO: Number of nodes with available pods: 2 May 20 11:02:46.261: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 20 11:02:46.302: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:02:46.317: INFO: Number of nodes with available pods: 2 May 20 11:02:46.317: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6265w, will wait for the garbage collector to delete the pods May 20 11:02:47.488: INFO: Deleting DaemonSet.extensions daemon-set took: 31.066983ms May 20 11:02:47.588: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.199722ms May 20 11:03:01.302: INFO: Number of nodes with available pods: 0 May 20 11:03:01.302: INFO: Number of running nodes: 0, number of available pods: 0 May 20 11:03:01.305: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6265w/daemonsets","resourceVersion":"11561092"},"items":null} May 20 11:03:01.307: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6265w/pods","resourceVersion":"11561092"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:03:01.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-6265w" for this suite. May 20 11:03:07.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:03:07.342: INFO: namespace: e2e-tests-daemonsets-6265w, resource: bindings, ignored listing per whitelist May 20 11:03:07.446: INFO: namespace e2e-tests-daemonsets-6265w deletion completed in 6.127555245s • [SLOW TEST:26.325 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:03:07.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-7c5802a2-9a89-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 11:03:07.587: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7c59b4fc-9a89-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-zmhdt" to be "success or failure" May 20 11:03:07.596: INFO: Pod "pod-projected-secrets-7c59b4fc-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.084893ms May 20 11:03:09.620: INFO: Pod "pod-projected-secrets-7c59b4fc-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033107592s May 20 11:03:11.624: INFO: Pod "pod-projected-secrets-7c59b4fc-9a89-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.037264081s May 20 11:03:13.629: INFO: Pod "pod-projected-secrets-7c59b4fc-9a89-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041407477s STEP: Saw pod success May 20 11:03:13.629: INFO: Pod "pod-projected-secrets-7c59b4fc-9a89-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:03:13.632: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-7c59b4fc-9a89-11ea-b520-0242ac110018 container secret-volume-test: STEP: delete the pod May 20 11:03:13.663: INFO: Waiting for pod pod-projected-secrets-7c59b4fc-9a89-11ea-b520-0242ac110018 to disappear May 20 11:03:13.667: INFO: Pod pod-projected-secrets-7c59b4fc-9a89-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:03:13.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zmhdt" for this suite. May 20 11:03:19.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:03:19.784: INFO: namespace: e2e-tests-projected-zmhdt, resource: bindings, ignored listing per whitelist May 20 11:03:19.797: INFO: namespace e2e-tests-projected-zmhdt deletion completed in 6.127445709s • [SLOW TEST:12.351 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:03:19.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-83b4eb9a-9a89-11ea-b520-0242ac110018 May 20 11:03:19.941: INFO: Pod name my-hostname-basic-83b4eb9a-9a89-11ea-b520-0242ac110018: Found 0 pods out of 1 May 20 11:03:24.945: INFO: Pod name my-hostname-basic-83b4eb9a-9a89-11ea-b520-0242ac110018: Found 1 pods out of 1 May 20 11:03:24.945: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-83b4eb9a-9a89-11ea-b520-0242ac110018" are running May 20 11:03:24.947: INFO: Pod "my-hostname-basic-83b4eb9a-9a89-11ea-b520-0242ac110018-2shqj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 11:03:20 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 11:03:22 +0000 UTC Reason: 
Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 11:03:22 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 11:03:19 +0000 UTC Reason: Message:}]) May 20 11:03:24.947: INFO: Trying to dial the pod May 20 11:03:29.967: INFO: Controller my-hostname-basic-83b4eb9a-9a89-11ea-b520-0242ac110018: Got expected result from replica 1 [my-hostname-basic-83b4eb9a-9a89-11ea-b520-0242ac110018-2shqj]: "my-hostname-basic-83b4eb9a-9a89-11ea-b520-0242ac110018-2shqj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:03:29.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-zgxbl" for this suite. May 20 11:03:35.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:03:36.064: INFO: namespace: e2e-tests-replication-controller-zgxbl, resource: bindings, ignored listing per whitelist May 20 11:03:36.067: INFO: namespace e2e-tests-replication-controller-zgxbl deletion completed in 6.096093431s • [SLOW TEST:16.269 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:03:36.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 20 11:03:40.708: INFO: Successfully updated pod "pod-update-8d641a36-9a89-11ea-b520-0242ac110018" STEP: verifying the updated pod is in kubernetes May 20 11:03:40.731: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:03:40.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vxlxn" for this suite. 
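The Pods "should be updated" spec above creates a pod and then performs an in-place update against it; the log does not show which field is mutated, so the sketch below uses a label change, one of the pod fields that may legally be updated after creation (names are illustrative, the image and --generator flag are borrowed from commands seen elsewhere in this run):

kubectl run pod-update-demo --restart=Never --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine
kubectl label pod pod-update-demo time="$(date +%s)" --overwrite   # the in-place update
kubectl get pod pod-update-demo --show-labels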
May 20 11:04:02.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:04:02.804: INFO: namespace: e2e-tests-pods-vxlxn, resource: bindings, ignored listing per whitelist May 20 11:04:02.864: INFO: namespace e2e-tests-pods-vxlxn deletion completed in 22.130567463s • [SLOW TEST:26.797 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:04:02.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 11:04:02.995: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d623f2f-9a89-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-9ss8z" to be "success or failure" May 20 11:04:02.999: INFO: Pod "downwardapi-volume-9d623f2f-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.621463ms May 20 11:04:05.017: INFO: Pod "downwardapi-volume-9d623f2f-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021939094s May 20 11:04:07.058: INFO: Pod "downwardapi-volume-9d623f2f-9a89-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062964357s STEP: Saw pod success May 20 11:04:07.058: INFO: Pod "downwardapi-volume-9d623f2f-9a89-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:04:07.060: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-9d623f2f-9a89-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 11:04:07.084: INFO: Waiting for pod downwardapi-volume-9d623f2f-9a89-11ea-b520-0242ac110018 to disappear May 20 11:04:07.100: INFO: Pod downwardapi-volume-9d623f2f-9a89-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:04:07.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9ss8z" for this suite. 
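The projected downward API spec above exposes the container's own memory limit through a projected volume and reads it back. A sketch with illustrative names, image, and sizes:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        cpu: 250m
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF

kubectl logs downwardapi-volume-demo      # 67108864, the 64Mi limit in bytes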
May 20 11:04:13.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:04:13.170: INFO: namespace: e2e-tests-projected-9ss8z, resource: bindings, ignored listing per whitelist May 20 11:04:13.200: INFO: namespace e2e-tests-projected-9ss8z deletion completed in 6.096256002s • [SLOW TEST:10.336 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:04:13.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:04:17.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-xk4rf" for this suite. 
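The hostAliases spec above asks the kubelet to append extra entries to the pod's /etc/hosts. Reproducing that by hand, with illustrative names and addresses:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases-demo         # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/hosts"]
EOF

kubectl logs busybox-host-aliases-demo | grep foo.local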
May 20 11:05:03.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:05:03.479: INFO: namespace: e2e-tests-kubelet-test-xk4rf, resource: bindings, ignored listing per whitelist May 20 11:05:03.489: INFO: namespace e2e-tests-kubelet-test-xk4rf deletion completed in 46.178227769s • [SLOW TEST:50.289 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:05:03.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 11:05:03.679: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c18d0394-9a89-11ea-b520-0242ac110018" in namespace "e2e-tests-downward-api-lcxmh" to be "success or failure" May 20 11:05:03.724: INFO: Pod "downwardapi-volume-c18d0394-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 45.479618ms May 20 11:05:05.728: INFO: Pod "downwardapi-volume-c18d0394-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049365973s May 20 11:05:07.731: INFO: Pod "downwardapi-volume-c18d0394-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052456573s May 20 11:05:09.735: INFO: Pod "downwardapi-volume-c18d0394-9a89-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05610847s STEP: Saw pod success May 20 11:05:09.735: INFO: Pod "downwardapi-volume-c18d0394-9a89-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:05:09.737: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-c18d0394-9a89-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 11:05:09.815: INFO: Waiting for pod downwardapi-volume-c18d0394-9a89-11ea-b520-0242ac110018 to disappear May 20 11:05:09.933: INFO: Pod downwardapi-volume-c18d0394-9a89-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:05:09.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-lcxmh" for this suite. 
May 20 11:05:15.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:05:16.001: INFO: namespace: e2e-tests-downward-api-lcxmh, resource: bindings, ignored listing per whitelist May 20 11:05:16.069: INFO: namespace e2e-tests-downward-api-lcxmh deletion completed in 6.132645685s • [SLOW TEST:12.580 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:05:16.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-c8fe53b6-9a89-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 11:05:16.170: INFO: Waiting up to 5m0s for pod "pod-secrets-c8ffd319-9a89-11ea-b520-0242ac110018" in namespace "e2e-tests-secrets-tqd48" to be "success or failure" May 20 11:05:16.188: INFO: Pod "pod-secrets-c8ffd319-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.843971ms May 20 11:05:18.394: INFO: Pod "pod-secrets-c8ffd319-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224459133s May 20 11:05:20.399: INFO: Pod "pod-secrets-c8ffd319-9a89-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.228650045s STEP: Saw pod success May 20 11:05:20.399: INFO: Pod "pod-secrets-c8ffd319-9a89-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:05:20.402: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-c8ffd319-9a89-11ea-b520-0242ac110018 container secret-volume-test: STEP: delete the pod May 20 11:05:20.468: INFO: Waiting for pod pod-secrets-c8ffd319-9a89-11ea-b520-0242ac110018 to disappear May 20 11:05:20.580: INFO: Pod pod-secrets-c8ffd319-9a89-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:05:20.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tqd48" for this suite. 
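The Secrets spec above combines a key-to-path mapping with a per-item file mode on the secret volume. An illustrative equivalent; the 0400 mode mirrors the "Item Mode set" part of the test name, while the names and image are assumptions:

kubectl create secret generic secret-item-mode-demo --from-literal=data-1=value-1

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-item-mode-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume/new-path-data-1; cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-item-mode-demo
      items:
      - key: data-1
        path: new-path-data-1             # the mapping
        mode: 0400                        # the per-item mode
EOF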
May 20 11:05:26.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:05:26.623: INFO: namespace: e2e-tests-secrets-tqd48, resource: bindings, ignored listing per whitelist May 20 11:05:26.675: INFO: namespace e2e-tests-secrets-tqd48 deletion completed in 6.090814611s • [SLOW TEST:10.605 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:05:26.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 20 11:05:26.785: INFO: Waiting up to 5m0s for pod "downward-api-cf5243a9-9a89-11ea-b520-0242ac110018" in namespace "e2e-tests-downward-api-vftqz" to be "success or failure" May 20 11:05:26.797: INFO: Pod "downward-api-cf5243a9-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.021141ms May 20 11:05:28.879: INFO: Pod "downward-api-cf5243a9-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093977719s May 20 11:05:30.942: INFO: Pod "downward-api-cf5243a9-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156808109s May 20 11:05:32.946: INFO: Pod "downward-api-cf5243a9-9a89-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.160906217s STEP: Saw pod success May 20 11:05:32.946: INFO: Pod "downward-api-cf5243a9-9a89-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:05:32.950: INFO: Trying to get logs from node hunter-worker2 pod downward-api-cf5243a9-9a89-11ea-b520-0242ac110018 container dapi-container: STEP: delete the pod May 20 11:05:33.000: INFO: Waiting for pod downward-api-cf5243a9-9a89-11ea-b520-0242ac110018 to disappear May 20 11:05:33.012: INFO: Pod downward-api-cf5243a9-9a89-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:05:33.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vftqz" for this suite. 
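The Downward API spec above injects the container's own CPU and memory requests and limits as environment variables via resourceFieldRef. A sketch with illustrative names, image, and quantities:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF

kubectl logs downward-api-env-demo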
May 20 11:05:39.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:05:39.052: INFO: namespace: e2e-tests-downward-api-vftqz, resource: bindings, ignored listing per whitelist May 20 11:05:39.119: INFO: namespace e2e-tests-downward-api-vftqz deletion completed in 6.10410188s • [SLOW TEST:12.444 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:05:39.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 11:05:39.259: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6bbc1ca-9a89-11ea-b520-0242ac110018" in namespace "e2e-tests-downward-api-8jhfx" to be "success or failure" May 20 11:05:39.276: INFO: Pod "downwardapi-volume-d6bbc1ca-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.913196ms May 20 11:05:41.280: INFO: Pod "downwardapi-volume-d6bbc1ca-9a89-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02107362s May 20 11:05:43.293: INFO: Pod "downwardapi-volume-d6bbc1ca-9a89-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.033775442s May 20 11:05:45.298: INFO: Pod "downwardapi-volume-d6bbc1ca-9a89-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038835256s STEP: Saw pod success May 20 11:05:45.298: INFO: Pod "downwardapi-volume-d6bbc1ca-9a89-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:05:45.301: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-d6bbc1ca-9a89-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 11:05:45.332: INFO: Waiting for pod downwardapi-volume-d6bbc1ca-9a89-11ea-b520-0242ac110018 to disappear May 20 11:05:45.503: INFO: Pod downwardapi-volume-d6bbc1ca-9a89-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:05:45.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8jhfx" for this suite. 
May 20 11:05:51.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:05:51.607: INFO: namespace: e2e-tests-downward-api-8jhfx, resource: bindings, ignored listing per whitelist May 20 11:05:51.614: INFO: namespace e2e-tests-downward-api-8jhfx deletion completed in 6.107018996s • [SLOW TEST:12.495 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:05:51.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 20 11:05:51.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-2lhjg' May 20 11:05:54.613: INFO: stderr: "" May 20 11:05:54.613: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 May 20 11:05:54.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-2lhjg' May 20 11:05:59.368: INFO: stderr: "" May 20 11:05:59.368: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:05:59.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2lhjg" for this suite. 
May 20 11:06:05.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:06:05.550: INFO: namespace: e2e-tests-kubectl-2lhjg, resource: bindings, ignored listing per whitelist May 20 11:06:05.568: INFO: namespace e2e-tests-kubectl-2lhjg deletion completed in 6.100984117s • [SLOW TEST:13.954 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:06:05.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:06:05.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-z4lsz" for this suite. 
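The Pods Set QOS Class spec above only creates a pod and then checks status.qosClass. As a sketch, a pod whose single container has identical requests and limits is classified as Guaranteed; the name, image, and sizes here are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: qos-class-demo                # illustrative name
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi

Once the pod is admitted, kubectl get pod qos-class-demo -o jsonpath='{.status.qosClass}' prints Guaranteed; omitting requests and limits entirely would instead yield BestEffort, as seen in the pod dump later in this log.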
May 20 11:11:01.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:11:02.123: INFO: namespace: e2e-tests-pods-z4lsz, resource: bindings, ignored listing per whitelist May 20 11:11:02.177: INFO: namespace e2e-tests-pods-z4lsz deletion completed in 4m56.420428877s • [SLOW TEST:296.608 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:11:02.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 20 11:11:02.537: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:11:10.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-gj4hp" for this suite. 
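The init-container spec above only logs "PodSpec: initContainers in spec.initContainers", so here is a hedged sketch of a RestartAlways pod with two init containers that must both complete before the main container starts; all names, images, and commands are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: init-containers-demo          # illustrative name
spec:
  restartPolicy: Always
  initContainers:
  - name: init-step-1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo first init container done"]
  - name: init-step-2
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo second init container done"]
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine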
May 20 11:11:32.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:11:32.846: INFO: namespace: e2e-tests-init-container-gj4hp, resource: bindings, ignored listing per whitelist May 20 11:11:32.867: INFO: namespace e2e-tests-init-container-gj4hp deletion completed in 22.090049222s • [SLOW TEST:30.690 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:11:32.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 20 11:11:33.022: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-f4w6x,SelfLink:/api/v1/namespaces/e2e-tests-watch-f4w6x/configmaps/e2e-watch-test-label-changed,UID:a9939316-9a8a-11ea-99e8-0242ac110002,ResourceVersion:11562406,Generation:0,CreationTimestamp:2020-05-20 11:11:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 20 11:11:33.022: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-f4w6x,SelfLink:/api/v1/namespaces/e2e-tests-watch-f4w6x/configmaps/e2e-watch-test-label-changed,UID:a9939316-9a8a-11ea-99e8-0242ac110002,ResourceVersion:11562407,Generation:0,CreationTimestamp:2020-05-20 11:11:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 20 11:11:33.022: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-f4w6x,SelfLink:/api/v1/namespaces/e2e-tests-watch-f4w6x/configmaps/e2e-watch-test-label-changed,UID:a9939316-9a8a-11ea-99e8-0242ac110002,ResourceVersion:11562408,Generation:0,CreationTimestamp:2020-05-20 11:11:32 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 20 11:11:43.082: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-f4w6x,SelfLink:/api/v1/namespaces/e2e-tests-watch-f4w6x/configmaps/e2e-watch-test-label-changed,UID:a9939316-9a8a-11ea-99e8-0242ac110002,ResourceVersion:11562429,Generation:0,CreationTimestamp:2020-05-20 11:11:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 20 11:11:43.082: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-f4w6x,SelfLink:/api/v1/namespaces/e2e-tests-watch-f4w6x/configmaps/e2e-watch-test-label-changed,UID:a9939316-9a8a-11ea-99e8-0242ac110002,ResourceVersion:11562430,Generation:0,CreationTimestamp:2020-05-20 11:11:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 20 11:11:43.082: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-f4w6x,SelfLink:/api/v1/namespaces/e2e-tests-watch-f4w6x/configmaps/e2e-watch-test-label-changed,UID:a9939316-9a8a-11ea-99e8-0242ac110002,ResourceVersion:11562431,Generation:0,CreationTimestamp:2020-05-20 11:11:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:11:43.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-f4w6x" for this suite. 
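The watch above is opened with a label selector on watch-this-configmap (the object dumps show the value label-changed-and-restored), so removing that label hides the object from the watch and restoring it produces the later ADDED event. An illustrative ConfigMap that such a selector would match; the name, label, and mutation data key are taken from the dumps above, the rest is a sketch:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"

A watch filtered on that label, for example kubectl get configmaps --watch -l watch-this-configmap=label-changed-and-restored, would then report the ADDED, MODIFIED, and DELETED notifications recorded above.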
May 20 11:11:49.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:11:49.121: INFO: namespace: e2e-tests-watch-f4w6x, resource: bindings, ignored listing per whitelist May 20 11:11:49.176: INFO: namespace e2e-tests-watch-f4w6x deletion completed in 6.076644504s • [SLOW TEST:16.309 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:11:49.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium May 20 11:11:49.291: INFO: Waiting up to 5m0s for pod "pod-b35023f9-9a8a-11ea-b520-0242ac110018" in namespace "e2e-tests-emptydir-9stxf" to be "success or failure" May 20 11:11:49.307: INFO: Pod "pod-b35023f9-9a8a-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.10516ms May 20 11:11:51.316: INFO: Pod "pod-b35023f9-9a8a-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024975003s May 20 11:11:53.319: INFO: Pod "pod-b35023f9-9a8a-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028358743s STEP: Saw pod success May 20 11:11:53.319: INFO: Pod "pod-b35023f9-9a8a-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:11:53.322: INFO: Trying to get logs from node hunter-worker2 pod pod-b35023f9-9a8a-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 11:11:53.375: INFO: Waiting for pod pod-b35023f9-9a8a-11ea-b520-0242ac110018 to disappear May 20 11:11:53.390: INFO: Pod pod-b35023f9-9a8a-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:11:53.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9stxf" for this suite. 
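The emptydir spec above checks the mode of a volume on the node's default medium. A minimal sketch of such a pod, with illustrative names; the container simply prints the mount's permissions and exits:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium, i.e. node-local disk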
May 20 11:11:59.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:11:59.710: INFO: namespace: e2e-tests-emptydir-9stxf, resource: bindings, ignored listing per whitelist May 20 11:11:59.838: INFO: namespace e2e-tests-emptydir-9stxf deletion completed in 6.444558988s • [SLOW TEST:10.661 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:11:59.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 20 11:12:00.843: INFO: Waiting up to 5m0s for pod "pod-ba315a0f-9a8a-11ea-b520-0242ac110018" in namespace "e2e-tests-emptydir-9lzp2" to be "success or failure" May 20 11:12:00.933: INFO: Pod "pod-ba315a0f-9a8a-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 90.274575ms May 20 11:12:02.938: INFO: Pod "pod-ba315a0f-9a8a-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094637264s May 20 11:12:04.951: INFO: Pod "pod-ba315a0f-9a8a-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107789169s STEP: Saw pod success May 20 11:12:04.951: INFO: Pod "pod-ba315a0f-9a8a-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:12:04.953: INFO: Trying to get logs from node hunter-worker2 pod pod-ba315a0f-9a8a-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 11:12:04.982: INFO: Waiting for pod pod-ba315a0f-9a8a-11ea-b520-0242ac110018 to disappear May 20 11:12:04.991: INFO: Pod pod-ba315a0f-9a8a-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:12:04.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9lzp2" for this suite. 
May 20 11:12:11.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:12:11.045: INFO: namespace: e2e-tests-emptydir-9lzp2, resource: bindings, ignored listing per whitelist May 20 11:12:11.087: INFO: namespace e2e-tests-emptydir-9lzp2 deletion completed in 6.092488695s • [SLOW TEST:11.249 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:12:11.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-c0ede945-9a8a-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 11:12:12.679: INFO: Waiting up to 5m0s for pod "pod-secrets-c11c66f3-9a8a-11ea-b520-0242ac110018" in namespace "e2e-tests-secrets-qkldx" to be "success or failure" May 20 11:12:12.765: INFO: Pod "pod-secrets-c11c66f3-9a8a-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 85.490732ms May 20 11:12:14.770: INFO: Pod "pod-secrets-c11c66f3-9a8a-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090940318s May 20 11:12:16.775: INFO: Pod "pod-secrets-c11c66f3-9a8a-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095232048s May 20 11:12:18.779: INFO: Pod "pod-secrets-c11c66f3-9a8a-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.099504622s STEP: Saw pod success May 20 11:12:18.779: INFO: Pod "pod-secrets-c11c66f3-9a8a-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:12:18.782: INFO: Trying to get logs from node hunter-worker pod pod-secrets-c11c66f3-9a8a-11ea-b520-0242ac110018 container secret-volume-test: STEP: delete the pod May 20 11:12:18.833: INFO: Waiting for pod pod-secrets-c11c66f3-9a8a-11ea-b520-0242ac110018 to disappear May 20 11:12:18.840: INFO: Pod pod-secrets-c11c66f3-9a8a-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:12:18.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-qkldx" for this suite. 
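The secrets spec above mounts a secret as a non-root user with defaultMode and fsGroup set so the files remain readable. An illustrative manifest; the secret name, UID/GID, and mode here are assumptions, not the test's values:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo              # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                   # non-root
    fsGroup: 1000                     # secret files are owned by this group
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo    # must already exist in the namespace
      defaultMode: 0440               # group-readable, so fsGroup 1000 can read it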
May 20 11:12:24.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:12:24.895: INFO: namespace: e2e-tests-secrets-qkldx, resource: bindings, ignored listing per whitelist May 20 11:12:24.916: INFO: namespace e2e-tests-secrets-qkldx deletion completed in 6.072039322s • [SLOW TEST:13.828 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:12:24.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:12:31.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-knznz" for this suite. May 20 11:12:37.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:12:37.256: INFO: namespace: e2e-tests-namespaces-knznz, resource: bindings, ignored listing per whitelist May 20 11:12:37.303: INFO: namespace e2e-tests-namespaces-knznz deletion completed in 6.082213005s STEP: Destroying namespace "e2e-tests-nsdeletetest-hr88p" for this suite. May 20 11:12:37.305: INFO: Namespace e2e-tests-nsdeletetest-hr88p was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-qb5x8" for this suite. 
May 20 11:12:43.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:12:43.334: INFO: namespace: e2e-tests-nsdeletetest-qb5x8, resource: bindings, ignored listing per whitelist May 20 11:12:43.398: INFO: namespace e2e-tests-nsdeletetest-qb5x8 deletion completed in 6.092140125s • [SLOW TEST:18.482 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:12:43.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 20 11:12:43.485: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:12:52.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-6t4mq" for this suite. 
May 20 11:12:58.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:12:58.715: INFO: namespace: e2e-tests-init-container-6t4mq, resource: bindings, ignored listing per whitelist May 20 11:12:58.752: INFO: namespace e2e-tests-init-container-6t4mq deletion completed in 6.151969344s • [SLOW TEST:15.354 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:12:58.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 11:12:58.849: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dcc752ae-9a8a-11ea-b520-0242ac110018" in namespace "e2e-tests-downward-api-5ggvm" to be "success or failure" May 20 11:12:58.861: INFO: Pod "downwardapi-volume-dcc752ae-9a8a-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.835968ms May 20 11:13:00.865: INFO: Pod "downwardapi-volume-dcc752ae-9a8a-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016353041s May 20 11:13:02.881: INFO: Pod "downwardapi-volume-dcc752ae-9a8a-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032335729s May 20 11:13:04.885: INFO: Pod "downwardapi-volume-dcc752ae-9a8a-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036344662s STEP: Saw pod success May 20 11:13:04.885: INFO: Pod "downwardapi-volume-dcc752ae-9a8a-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:13:04.887: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-dcc752ae-9a8a-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 11:13:04.943: INFO: Waiting for pod downwardapi-volume-dcc752ae-9a8a-11ea-b520-0242ac110018 to disappear May 20 11:13:04.981: INFO: Pod downwardapi-volume-dcc752ae-9a8a-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:13:04.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5ggvm" for this suite. 
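The downward-api spec above projects only the pod's own name into a file via fieldRef. A minimal sketch; the pod name, image, and path are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name    # the file contains the pod's own name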
May 20 11:13:11.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:13:11.050: INFO: namespace: e2e-tests-downward-api-5ggvm, resource: bindings, ignored listing per whitelist May 20 11:13:11.090: INFO: namespace e2e-tests-downward-api-5ggvm deletion completed in 6.104546103s • [SLOW TEST:12.338 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:13:11.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-6v6vb May 20 11:13:17.261: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-6v6vb STEP: checking the pod's current state and verifying that restartCount is present May 20 11:13:17.264: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:17:18.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-6v6vb" for this suite. 
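The liveness-http pod above keeps a restartCount of 0 through the whole four-minute observation window because its /healthz endpoint keeps answering successfully. A hedged sketch of that shape of probe; the image and port are hypothetical stand-ins for a server that always returns 200 on /healthz, not the image the test actually uses:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http                 # name taken from the log; spec is illustrative
spec:
  containers:
  - name: liveness
    image: registry.example.com/healthz-ok:latest   # hypothetical image that always serves 200 on /healthz
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3

As long as the probe succeeds, the kubelet never restarts the container, which is exactly what the restartCount check above verifies.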
May 20 11:17:24.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:17:24.449: INFO: namespace: e2e-tests-container-probe-6v6vb, resource: bindings, ignored listing per whitelist May 20 11:17:24.449: INFO: namespace e2e-tests-container-probe-6v6vb deletion completed in 6.078391805s • [SLOW TEST:253.359 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:17:24.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed May 20 11:17:28.667: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-7b28ee3c-9a8b-11ea-b520-0242ac110018", GenerateName:"", Namespace:"e2e-tests-pods-b5tgv", SelfLink:"/api/v1/namespaces/e2e-tests-pods-b5tgv/pods/pod-submit-remove-7b28ee3c-9a8b-11ea-b520-0242ac110018", UID:"7b2bfb57-9a8b-11ea-99e8-0242ac110002", ResourceVersion:"11563289", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725570244, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"563313680"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-smddc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0000b3b00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-smddc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ee1348), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001b690e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ee1390)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ee13d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001ee13d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001ee13dc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725570244, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725570247, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725570247, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725570244, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.85", StartTime:(*v1.Time)(0xc00127a840), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00127a860), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://b969bb7ef59c9094cf8411586cce1efdc117d614eec909d5c36f9cf907f9ccea"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:17:41.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-b5tgv" for this suite. May 20 11:17:48.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:17:48.615: INFO: namespace: e2e-tests-pods-b5tgv, resource: bindings, ignored listing per whitelist May 20 11:17:48.629: INFO: namespace e2e-tests-pods-b5tgv deletion completed in 6.776447174s • [SLOW TEST:24.179 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:17:48.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 20 11:17:48.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-tg8tx' May 20 11:17:52.969: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 20 11:17:52.969: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 20 11:17:55.035: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-lnkzn] May 20 11:17:55.035: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-lnkzn" in namespace "e2e-tests-kubectl-tg8tx" to be "running and ready" May 20 11:17:55.048: INFO: Pod "e2e-test-nginx-rc-lnkzn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.545737ms May 20 11:17:57.050: INFO: Pod "e2e-test-nginx-rc-lnkzn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015090737s May 20 11:17:59.053: INFO: Pod "e2e-test-nginx-rc-lnkzn": Phase="Running", Reason="", readiness=true. Elapsed: 4.018461429s May 20 11:17:59.053: INFO: Pod "e2e-test-nginx-rc-lnkzn" satisfied condition "running and ready" May 20 11:17:59.054: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-lnkzn] May 20 11:17:59.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-tg8tx' May 20 11:17:59.208: INFO: stderr: "" May 20 11:17:59.208: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 20 11:17:59.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-tg8tx' May 20 11:17:59.338: INFO: stderr: "" May 20 11:17:59.339: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:17:59.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tg8tx" for this suite. 
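The deprecated run/v1 generator used above creates a ReplicationController rather than a bare pod. Roughly the object it produces, sketched by hand; the run=<name> selector label and the container name are assumptions about the generator's defaults, not copied from the test:

apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc            # assumed generator default label
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine

kubectl logs rc/e2e-test-nginx-rc, as run in the test, then picks a pod owned by that controller and streams its logs (empty in the log above, presumably because nginx had not written anything yet).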
May 20 11:18:17.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:18:17.698: INFO: namespace: e2e-tests-kubectl-tg8tx, resource: bindings, ignored listing per whitelist May 20 11:18:17.857: INFO: namespace e2e-tests-kubectl-tg8tx deletion completed in 18.515432929s • [SLOW TEST:29.228 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:18:17.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 11:18:18.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b40092e-9a8b-11ea-b520-0242ac110018" in namespace "e2e-tests-downward-api-j97q9" to be "success or failure" May 20 11:18:18.660: INFO: Pod "downwardapi-volume-9b40092e-9a8b-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 251.743068ms May 20 11:18:20.663: INFO: Pod "downwardapi-volume-9b40092e-9a8b-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254857267s May 20 11:18:22.802: INFO: Pod "downwardapi-volume-9b40092e-9a8b-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.393926428s May 20 11:18:24.933: INFO: Pod "downwardapi-volume-9b40092e-9a8b-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.525417076s May 20 11:18:26.939: INFO: Pod "downwardapi-volume-9b40092e-9a8b-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.531199457s STEP: Saw pod success May 20 11:18:26.939: INFO: Pod "downwardapi-volume-9b40092e-9a8b-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:18:26.942: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-9b40092e-9a8b-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 11:18:27.091: INFO: Waiting for pod downwardapi-volume-9b40092e-9a8b-11ea-b520-0242ac110018 to disappear May 20 11:18:27.115: INFO: Pod downwardapi-volume-9b40092e-9a8b-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:18:27.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-j97q9" for this suite. May 20 11:18:33.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:18:33.152: INFO: namespace: e2e-tests-downward-api-j97q9, resource: bindings, ignored listing per whitelist May 20 11:18:33.187: INFO: namespace e2e-tests-downward-api-j97q9 deletion completed in 6.069027916s • [SLOW TEST:15.330 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:18:33.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command May 20 11:18:33.284: INFO: Waiting up to 5m0s for pod "var-expansion-a41b20b4-9a8b-11ea-b520-0242ac110018" in namespace "e2e-tests-var-expansion-9t2s4" to be "success or failure" May 20 11:18:33.358: INFO: Pod "var-expansion-a41b20b4-9a8b-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 73.694213ms May 20 11:18:35.544: INFO: Pod "var-expansion-a41b20b4-9a8b-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259292823s May 20 11:18:37.610: INFO: Pod "var-expansion-a41b20b4-9a8b-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.325137088s May 20 11:18:39.613: INFO: Pod "var-expansion-a41b20b4-9a8b-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.328592011s STEP: Saw pod success May 20 11:18:39.613: INFO: Pod "var-expansion-a41b20b4-9a8b-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:18:39.616: INFO: Trying to get logs from node hunter-worker pod var-expansion-a41b20b4-9a8b-11ea-b520-0242ac110018 container dapi-container: STEP: delete the pod May 20 11:18:39.707: INFO: Waiting for pod var-expansion-a41b20b4-9a8b-11ea-b520-0242ac110018 to disappear May 20 11:18:39.781: INFO: Pod var-expansion-a41b20b4-9a8b-11ea-b520-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:18:39.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-9t2s4" for this suite. May 20 11:18:45.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:18:45.874: INFO: namespace: e2e-tests-var-expansion-9t2s4, resource: bindings, ignored listing per whitelist May 20 11:18:45.880: INFO: namespace e2e-tests-var-expansion-9t2s4 deletion completed in 6.095408697s • [SLOW TEST:12.693 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:18:45.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-cn66w May 20 11:18:50.157: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-cn66w STEP: checking the pod's current state and verifying that restartCount is present May 20 11:18:50.159: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:22:51.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-cn66w" for this suite. 
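The liveness-exec pod above, like the HTTP case earlier, must finish its observation window with restartCount still 0, which means the probed file has to stay present. A minimal sketch of that pattern, assuming a busybox image and a long sleep; the probe command cat /tmp/health comes from the spec name, everything else is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec                 # name taken from the log; spec is illustrative
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 10

Because /tmp/health is created once and never removed, every probe succeeds and the kubelet has no reason to restart the container.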
May 20 11:22:57.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:22:57.870: INFO: namespace: e2e-tests-container-probe-cn66w, resource: bindings, ignored listing per whitelist May 20 11:22:57.879: INFO: namespace e2e-tests-container-probe-cn66w deletion completed in 6.098157732s • [SLOW TEST:251.998 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:22:57.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-41eac768-9a8c-11ea-b520-0242ac110018 STEP: Creating secret with name secret-projected-all-test-volume-41eac721-9a8c-11ea-b520-0242ac110018 STEP: Creating a pod to test Check all projections for projected volume plugin May 20 11:22:58.039: INFO: Waiting up to 5m0s for pod "projected-volume-41eac6ab-9a8c-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-pb6kb" to be "success or failure" May 20 11:22:58.043: INFO: Pod "projected-volume-41eac6ab-9a8c-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157487ms May 20 11:23:00.048: INFO: Pod "projected-volume-41eac6ab-9a8c-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008616642s May 20 11:23:02.052: INFO: Pod "projected-volume-41eac6ab-9a8c-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013199866s STEP: Saw pod success May 20 11:23:02.052: INFO: Pod "projected-volume-41eac6ab-9a8c-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:23:02.056: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-41eac6ab-9a8c-11ea-b520-0242ac110018 container projected-all-volume-test: STEP: delete the pod May 20 11:23:02.251: INFO: Waiting for pod projected-volume-41eac6ab-9a8c-11ea-b520-0242ac110018 to disappear May 20 11:23:02.270: INFO: Pod projected-volume-41eac6ab-9a8c-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:23:02.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pb6kb" for this suite. 
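The projected-volume pod above combines a ConfigMap, a Secret, and downward API items behind a single mount, which is what the projected volume type is for. An illustrative manifest; the source object names, mount path, and command are assumptions, and the referenced ConfigMap and Secret must already exist:

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -R /projected"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: projected-demo-configmap   # assumed, must already exist
      - secret:
          name: projected-demo-secret      # assumed, must already exist
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name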
May 20 11:23:08.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:23:08.332: INFO: namespace: e2e-tests-projected-pb6kb, resource: bindings, ignored listing per whitelist May 20 11:23:08.379: INFO: namespace e2e-tests-projected-pb6kb deletion completed in 6.105215823s • [SLOW TEST:10.500 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:23:08.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-fkb4 STEP: Creating a pod to test atomic-volume-subpath May 20 11:23:08.562: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-fkb4" in namespace "e2e-tests-subpath-6hccd" to be "success or failure" May 20 11:23:08.583: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.38888ms May 20 11:23:10.588: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025311532s May 20 11:23:12.592: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029664132s May 20 11:23:14.597: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034023216s May 20 11:23:16.601: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Running", Reason="", readiness=true. Elapsed: 8.038049881s May 20 11:23:18.605: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Running", Reason="", readiness=false. Elapsed: 10.04201617s May 20 11:23:20.608: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Running", Reason="", readiness=false. Elapsed: 12.045748785s May 20 11:23:22.686: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Running", Reason="", readiness=false. Elapsed: 14.123324319s May 20 11:23:24.690: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Running", Reason="", readiness=false. Elapsed: 16.127085638s May 20 11:23:26.694: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Running", Reason="", readiness=false. Elapsed: 18.131089823s May 20 11:23:28.697: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Running", Reason="", readiness=false. Elapsed: 20.134241576s May 20 11:23:30.699: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.136752083s May 20 11:23:32.703: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Running", Reason="", readiness=false. Elapsed: 24.14098097s May 20 11:23:34.707: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Running", Reason="", readiness=false. Elapsed: 26.144797167s May 20 11:23:36.710: INFO: Pod "pod-subpath-test-secret-fkb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.147683761s STEP: Saw pod success May 20 11:23:36.710: INFO: Pod "pod-subpath-test-secret-fkb4" satisfied condition "success or failure" May 20 11:23:36.712: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-fkb4 container test-container-subpath-secret-fkb4: STEP: delete the pod May 20 11:23:36.745: INFO: Waiting for pod pod-subpath-test-secret-fkb4 to disappear May 20 11:23:36.756: INFO: Pod pod-subpath-test-secret-fkb4 no longer exists STEP: Deleting pod pod-subpath-test-secret-fkb4 May 20 11:23:36.756: INFO: Deleting pod "pod-subpath-test-secret-fkb4" in namespace "e2e-tests-subpath-6hccd" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:23:36.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-6hccd" for this suite. May 20 11:23:42.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:23:42.823: INFO: namespace: e2e-tests-subpath-6hccd, resource: bindings, ignored listing per whitelist May 20 11:23:42.870: INFO: namespace e2e-tests-subpath-6hccd deletion completed in 6.108638371s • [SLOW TEST:34.491 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:23:42.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin May 20 11:23:43.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-ndgxd run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 20 11:23:45.899: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0520 11:23:45.818525 418 log.go:172] (0xc000698370) (0xc0005dd4a0) Create stream\nI0520 11:23:45.818581 418 log.go:172] (0xc000698370) (0xc0005dd4a0) Stream added, broadcasting: 1\nI0520 11:23:45.820665 418 log.go:172] (0xc000698370) Reply frame received for 1\nI0520 11:23:45.820720 418 log.go:172] (0xc000698370) (0xc00037a5a0) Create stream\nI0520 11:23:45.820739 418 log.go:172] (0xc000698370) (0xc00037a5a0) Stream added, broadcasting: 3\nI0520 11:23:45.821873 418 log.go:172] (0xc000698370) Reply frame received for 3\nI0520 11:23:45.821905 418 log.go:172] (0xc000698370) (0xc00037a640) Create stream\nI0520 11:23:45.821916 418 log.go:172] (0xc000698370) (0xc00037a640) Stream added, broadcasting: 5\nI0520 11:23:45.822674 418 log.go:172] (0xc000698370) Reply frame received for 5\nI0520 11:23:45.822719 418 log.go:172] (0xc000698370) (0xc0005dd540) Create stream\nI0520 11:23:45.822739 418 log.go:172] (0xc000698370) (0xc0005dd540) Stream added, broadcasting: 7\nI0520 11:23:45.823594 418 log.go:172] (0xc000698370) Reply frame received for 7\nI0520 11:23:45.823797 418 log.go:172] (0xc00037a5a0) (3) Writing data frame\nI0520 11:23:45.823903 418 log.go:172] (0xc00037a5a0) (3) Writing data frame\nI0520 11:23:45.824958 418 log.go:172] (0xc000698370) Data frame received for 5\nI0520 11:23:45.824985 418 log.go:172] (0xc00037a640) (5) Data frame handling\nI0520 11:23:45.825001 418 log.go:172] (0xc00037a640) (5) Data frame sent\nI0520 11:23:45.825854 418 log.go:172] (0xc000698370) Data frame received for 5\nI0520 11:23:45.825882 418 log.go:172] (0xc00037a640) (5) Data frame handling\nI0520 11:23:45.825902 418 log.go:172] (0xc00037a640) (5) Data frame sent\nI0520 11:23:45.876052 418 log.go:172] (0xc000698370) Data frame received for 7\nI0520 11:23:45.876110 418 log.go:172] (0xc0005dd540) (7) Data frame handling\nI0520 11:23:45.876281 418 log.go:172] (0xc000698370) Data frame received for 5\nI0520 11:23:45.876316 418 log.go:172] (0xc00037a640) (5) Data frame handling\nI0520 11:23:45.876341 418 log.go:172] (0xc000698370) Data frame received for 1\nI0520 11:23:45.876347 418 log.go:172] (0xc0005dd4a0) (1) Data frame handling\nI0520 11:23:45.876361 418 log.go:172] (0xc0005dd4a0) (1) Data frame sent\nI0520 11:23:45.876374 418 log.go:172] (0xc000698370) (0xc0005dd4a0) Stream removed, broadcasting: 1\nI0520 11:23:45.876442 418 log.go:172] (0xc000698370) (0xc00037a5a0) Stream removed, broadcasting: 3\nI0520 11:23:45.876587 418 log.go:172] (0xc000698370) (0xc0005dd4a0) Stream removed, broadcasting: 1\nI0520 11:23:45.876628 418 log.go:172] (0xc000698370) (0xc00037a5a0) Stream removed, broadcasting: 3\nI0520 11:23:45.876667 418 log.go:172] (0xc000698370) Go away received\nI0520 11:23:45.876705 418 log.go:172] (0xc000698370) (0xc00037a640) Stream removed, broadcasting: 5\nI0520 11:23:45.876794 418 log.go:172] (0xc000698370) (0xc0005dd540) Stream removed, broadcasting: 7\n" May 20 11:23:45.899: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:23:47.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ndgxd" for this suite. 
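(Annotation: the --rm job flow above boils down to one command; the flags below are the ones the suite itself ran, while the job name and stdin payload are illustrative. The generator flag is the deprecated 1.13-era form flagged in the stderr above:

echo 'abcd1234' | kubectl run e2e-demo-rm-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin -- sh -c 'cat && echo "stdin closed"'
# --rm deletes the job once the attached session ends; verify with:
kubectl get jobs
)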
May 20 11:23:53.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:23:53.975: INFO: namespace: e2e-tests-kubectl-ndgxd, resource: bindings, ignored listing per whitelist May 20 11:23:54.006: INFO: namespace e2e-tests-kubectl-ndgxd deletion completed in 6.098886097s • [SLOW TEST:11.135 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:23:54.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy May 20 11:23:54.113: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix783554060/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:23:54.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nnd6n" for this suite. 
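(Annotation: the proxy test above only checks that /api/ is reachable through a Unix socket; the equivalent by hand, with an illustrative socket path:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
# query the API root through the socket instead of a TCP port
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill %1
)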
May 20 11:24:00.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:24:00.325: INFO: namespace: e2e-tests-kubectl-nnd6n, resource: bindings, ignored listing per whitelist May 20 11:24:00.355: INFO: namespace e2e-tests-kubectl-nnd6n deletion completed in 6.156038342s • [SLOW TEST:6.350 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:24:00.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-67b42528-9a8c-11ea-b520-0242ac110018 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-67b42528-9a8c-11ea-b520-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:24:13.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-j9tx6" for this suite. 
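(Annotation: the ConfigMap-update test above relies on the kubelet refreshing files inside a configMap volume in place. A hand-run sketch of the same idea, with illustrative names and keys:

kubectl create configmap demo-cm --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-demo             # hypothetical name
spec:
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    args: ["/bin/sh", "-c", "sleep 600"]
    volumeMounts:
    - name: cm
      mountPath: /etc/config
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF
# once the pod is Running:
kubectl exec cm-volume-demo -- cat /etc/config/data-1      # value-1
# change the key; after the kubelet sync period the mounted file follows
kubectl create configmap demo-cm --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl apply -f -
)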
May 20 11:24:35.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:24:36.043: INFO: namespace: e2e-tests-configmap-j9tx6, resource: bindings, ignored listing per whitelist May 20 11:24:36.059: INFO: namespace e2e-tests-configmap-j9tx6 deletion completed in 22.085550405s • [SLOW TEST:35.703 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:24:36.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 20 11:24:36.167: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 11:24:36.198: INFO: Waiting for terminating namespaces to be deleted... May 20 11:24:36.200: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 20 11:24:36.207: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 20 11:24:36.207: INFO: Container kube-proxy ready: true, restart count 0 May 20 11:24:36.207: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 20 11:24:36.207: INFO: Container kindnet-cni ready: true, restart count 0 May 20 11:24:36.207: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 20 11:24:36.207: INFO: Container coredns ready: true, restart count 0 May 20 11:24:36.207: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 20 11:24:36.212: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 20 11:24:36.212: INFO: Container kindnet-cni ready: true, restart count 0 May 20 11:24:36.212: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 20 11:24:36.212: INFO: Container coredns ready: true, restart count 0 May 20 11:24:36.212: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 20 11:24:36.212: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 May 20 11:24:36.316: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker May 20 11:24:36.316: INFO: Pod 
coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 May 20 11:24:36.316: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker May 20 11:24:36.316: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 May 20 11:24:36.316: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 May 20 11:24:36.316: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-7c81381d-9a8c-11ea-b520-0242ac110018.1610b8d104a6f91b], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-rlz49/filler-pod-7c81381d-9a8c-11ea-b520-0242ac110018 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-7c81381d-9a8c-11ea-b520-0242ac110018.1610b8d15757ee19], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-7c81381d-9a8c-11ea-b520-0242ac110018.1610b8d1cb86e454], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-7c81381d-9a8c-11ea-b520-0242ac110018.1610b8d1e3535e04], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-7c820191-9a8c-11ea-b520-0242ac110018.1610b8d10501054c], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-rlz49/filler-pod-7c820191-9a8c-11ea-b520-0242ac110018 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-7c820191-9a8c-11ea-b520-0242ac110018.1610b8d1a31bdadb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-7c820191-9a8c-11ea-b520-0242ac110018.1610b8d1eb6f9706], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-7c820191-9a8c-11ea-b520-0242ac110018.1610b8d1fb9032e1], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.1610b8d26ba43a96], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:24:43.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-rlz49" for this suite. 
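(Annotation: the predicate test above fills each node's allocatable CPU with pause pods and then submits one more request than fits. The failure mode is reproducible by asking for more CPU than any node has free; name and number below are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: too-big-for-any-node       # hypothetical name
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "64"                  # far beyond the workers' allocatable CPU
EOF
# the scheduler reports it the same way as the event above (FailedScheduling / Insufficient cpu):
kubectl describe pod too-big-for-any-node | grep -A2 Events
)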
May 20 11:24:51.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:24:51.845: INFO: namespace: e2e-tests-sched-pred-rlz49, resource: bindings, ignored listing per whitelist May 20 11:24:51.871: INFO: namespace e2e-tests-sched-pred-rlz49 deletion completed in 8.410804178s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:15.811 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:24:51.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 20 11:24:52.029: INFO: PodSpec: initContainers in spec.initContainers May 20 11:25:46.614: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-85deea5e-9a8c-11ea-b520-0242ac110018", GenerateName:"", Namespace:"e2e-tests-init-container-2v8sc", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-2v8sc/pods/pod-init-85deea5e-9a8c-11ea-b520-0242ac110018", UID:"85df90d5-9a8c-11ea-99e8-0242ac110002", ResourceVersion:"11564573", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725570692, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"29915358"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ftjwt", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002380c40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ftjwt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ftjwt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ftjwt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), 
ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0023b2538), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c34240), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0023b25c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0023b25e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0023b25e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0023b25ec)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725570692, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725570692, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725570692, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725570692, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.74", StartTime:(*v1.Time)(0xc001429100), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000697b90)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000697c70)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d259498d417bcad6daaa0af4a92b2b58c3f8b29537ab5561adcbd49261a5cf55"}, 
v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001429180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001429140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:25:46.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-2v8sc" for this suite. May 20 11:26:10.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:26:10.988: INFO: namespace: e2e-tests-init-container-2v8sc, resource: bindings, ignored listing per whitelist May 20 11:26:11.019: INFO: namespace e2e-tests-init-container-2v8sc deletion completed in 24.325243839s • [SLOW TEST:79.149 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:26:11.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 20 11:26:20.746: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:26:22.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-n9z8j" for this suite. 
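(Annotation: the adoption/release behaviour checked above can be reproduced directly: a bare pod whose labels match a ReplicaSet selector gets an ownerReference added, and relabeling it releases it again. All names below are illustrative:

kubectl run pod-adoption-demo --image=k8s.gcr.io/pause:3.1 \
  --restart=Never --labels=name=pod-adoption-demo
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-adoption-demo           # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-demo
  template:
    metadata:
      labels:
        name: pod-adoption-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
# adopted: the bare pod now carries an ownerReference to the ReplicaSet
kubectl get pod pod-adoption-demo -o jsonpath='{.metadata.ownerReferences[0].kind}'
# changing the matched label releases it; the ReplicaSet then creates a replacement
kubectl label pod pod-adoption-demo name=released --overwrite
)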
May 20 11:26:48.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:26:48.274: INFO: namespace: e2e-tests-replicaset-n9z8j, resource: bindings, ignored listing per whitelist May 20 11:26:48.308: INFO: namespace e2e-tests-replicaset-n9z8j deletion completed in 26.276127847s • [SLOW TEST:37.288 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:26:48.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 20 11:26:48.729: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lf6p6,SelfLink:/api/v1/namespaces/e2e-tests-watch-lf6p6/configmaps/e2e-watch-test-watch-closed,UID:cb67ac9a-9a8c-11ea-99e8-0242ac110002,ResourceVersion:11564754,Generation:0,CreationTimestamp:2020-05-20 11:26:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 20 11:26:48.729: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lf6p6,SelfLink:/api/v1/namespaces/e2e-tests-watch-lf6p6/configmaps/e2e-watch-test-watch-closed,UID:cb67ac9a-9a8c-11ea-99e8-0242ac110002,ResourceVersion:11564755,Generation:0,CreationTimestamp:2020-05-20 11:26:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 20 11:26:48.852: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lf6p6,SelfLink:/api/v1/namespaces/e2e-tests-watch-lf6p6/configmaps/e2e-watch-test-watch-closed,UID:cb67ac9a-9a8c-11ea-99e8-0242ac110002,ResourceVersion:11564756,Generation:0,CreationTimestamp:2020-05-20 11:26:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 20 11:26:48.852: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lf6p6,SelfLink:/api/v1/namespaces/e2e-tests-watch-lf6p6/configmaps/e2e-watch-test-watch-closed,UID:cb67ac9a-9a8c-11ea-99e8-0242ac110002,ResourceVersion:11564757,Generation:0,CreationTimestamp:2020-05-20 11:26:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:26:48.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-lf6p6" for this suite. May 20 11:26:54.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:26:54.991: INFO: namespace: e2e-tests-watch-lf6p6, resource: bindings, ignored listing per whitelist May 20 11:26:55.039: INFO: namespace e2e-tests-watch-lf6p6 deletion completed in 6.126164354s • [SLOW TEST:6.731 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:26:55.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-k6jl7 in namespace e2e-tests-proxy-xv284 I0520 11:26:55.229622 7 runners.go:184] Created replication controller with name: proxy-service-k6jl7, namespace: e2e-tests-proxy-xv284, replica count: 1 I0520 11:26:56.280006 7 runners.go:184] proxy-service-k6jl7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0520 11:26:57.280230 7 runners.go:184] proxy-service-k6jl7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 11:26:58.280412 7 runners.go:184] proxy-service-k6jl7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 11:26:59.280622 7 runners.go:184] proxy-service-k6jl7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 11:27:00.280837 7 runners.go:184] proxy-service-k6jl7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0520 11:27:01.281106 7 runners.go:184] proxy-service-k6jl7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0520 11:27:02.281506 7 runners.go:184] proxy-service-k6jl7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0520 11:27:03.281748 7 runners.go:184] proxy-service-k6jl7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0520 11:27:04.281982 7 runners.go:184] proxy-service-k6jl7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0520 11:27:05.282196 7 runners.go:184] proxy-service-k6jl7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0520 11:27:06.282458 7 runners.go:184] proxy-service-k6jl7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0520 11:27:07.282707 7 runners.go:184] proxy-service-k6jl7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 11:27:07.408: INFO: setup took 12.219978591s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 20 11:27:07.458: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-xv284/pods/http:proxy-service-k6jl7-whxhc:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-e2907e82-9a8c-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 11:27:27.567: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e290ee21-9a8c-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-gjhbp" to be "success or failure" May 20 11:27:27.576: INFO: Pod "pod-projected-configmaps-e290ee21-9a8c-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.874357ms May 20 11:27:29.636: INFO: Pod "pod-projected-configmaps-e290ee21-9a8c-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068194456s May 20 11:27:31.732: INFO: Pod "pod-projected-configmaps-e290ee21-9a8c-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.164308776s May 20 11:27:33.804: INFO: Pod "pod-projected-configmaps-e290ee21-9a8c-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.236364099s STEP: Saw pod success May 20 11:27:33.804: INFO: Pod "pod-projected-configmaps-e290ee21-9a8c-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:27:33.807: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e290ee21-9a8c-11ea-b520-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 20 11:27:34.070: INFO: Waiting for pod pod-projected-configmaps-e290ee21-9a8c-11ea-b520-0242ac110018 to disappear May 20 11:27:34.116: INFO: Pod pod-projected-configmaps-e290ee21-9a8c-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:27:34.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gjhbp" for this suite. May 20 11:27:40.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:27:40.533: INFO: namespace: e2e-tests-projected-gjhbp, resource: bindings, ignored listing per whitelist May 20 11:27:40.594: INFO: namespace e2e-tests-projected-gjhbp deletion completed in 6.291837484s • [SLOW TEST:13.166 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:27:40.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:27:47.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-lrnbn" for this suite. 
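(Annotation: the projected-configMap-as-non-root test a little earlier in this block differs from the plain configMap-volume case only in the pod-level securityContext. A minimal sketch, with an illustrative UID and names, assuming the configMap exists:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-nonroot-demo  # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                # read the projected files as a non-root UID
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    args: ["/bin/sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-cm            # assumed to exist
EOF
)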
May 20 11:28:09.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:28:09.886: INFO: namespace: e2e-tests-replication-controller-lrnbn, resource: bindings, ignored listing per whitelist May 20 11:28:09.920: INFO: namespace e2e-tests-replication-controller-lrnbn deletion completed in 22.100708416s • [SLOW TEST:29.326 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:28:09.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-fbe7893c-9a8c-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 11:28:10.076: INFO: Waiting up to 5m0s for pod "pod-secrets-fbe94a2a-9a8c-11ea-b520-0242ac110018" in namespace "e2e-tests-secrets-6tsjv" to be "success or failure" May 20 11:28:10.127: INFO: Pod "pod-secrets-fbe94a2a-9a8c-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 50.990208ms May 20 11:28:12.131: INFO: Pod "pod-secrets-fbe94a2a-9a8c-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055109555s May 20 11:28:14.134: INFO: Pod "pod-secrets-fbe94a2a-9a8c-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058454183s May 20 11:28:16.139: INFO: Pod "pod-secrets-fbe94a2a-9a8c-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063136082s STEP: Saw pod success May 20 11:28:16.139: INFO: Pod "pod-secrets-fbe94a2a-9a8c-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:28:16.143: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-fbe94a2a-9a8c-11ea-b520-0242ac110018 container secret-env-test: STEP: delete the pod May 20 11:28:16.180: INFO: Waiting for pod pod-secrets-fbe94a2a-9a8c-11ea-b520-0242ac110018 to disappear May 20 11:28:16.189: INFO: Pod pod-secrets-fbe94a2a-9a8c-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:28:16.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-6tsjv" for this suite. 
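(Annotation: the secret-to-env-var wiring checked above is a one-field change on the container spec; secret name, key and variable name below are illustrative:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: docker.io/library/busybox:1.29
    args: ["/bin/sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: data-1
EOF
kubectl logs secret-env-demo       # value-1
)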
May 20 11:28:22.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:28:22.284: INFO: namespace: e2e-tests-secrets-6tsjv, resource: bindings, ignored listing per whitelist May 20 11:28:22.304: INFO: namespace e2e-tests-secrets-6tsjv deletion completed in 6.086113046s • [SLOW TEST:12.383 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:28:22.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 20 11:28:26.549: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-0346beb7-9a8d-11ea-b520-0242ac110018,GenerateName:,Namespace:e2e-tests-events-b7ffs,SelfLink:/api/v1/namespaces/e2e-tests-events-b7ffs/pods/send-events-0346beb7-9a8d-11ea-b520-0242ac110018,UID:0354b1f8-9a8d-11ea-99e8-0242ac110002,ResourceVersion:11565085,Generation:0,CreationTimestamp:2020-05-20 11:28:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 425573500,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9vz6r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9vz6r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-9vz6r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021fea70} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0021fea90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:28:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:28:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:28:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:28:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.77,StartTime:2020-05-20 11:28:22 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-20 11:28:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://83d59c9b6fa5523603609b4c09600830fc85a2f4870e90774442839914bd958e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 20 11:28:28.555: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 20 11:28:30.673: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:28:30.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-b7ffs" for this suite. May 20 11:29:12.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:29:12.838: INFO: namespace: e2e-tests-events-b7ffs, resource: bindings, ignored listing per whitelist May 20 11:29:12.859: INFO: namespace e2e-tests-events-b7ffs deletion completed in 42.172613738s • [SLOW TEST:50.555 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:29:12.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 11:29:13.179: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2182cd6b-9a8d-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00246d0ea), BlockOwnerDeletion:(*bool)(0xc00246d0eb)}} May 20 11:29:13.191: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2181a1b6-9a8d-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0023c15f2), 
BlockOwnerDeletion:(*bool)(0xc0023c15f3)}} May 20 11:29:13.213: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2182295a-9a8d-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00246d2c2), BlockOwnerDeletion:(*bool)(0xc00246d2c3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:29:18.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-8c5sh" for this suite. May 20 11:29:24.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:29:24.394: INFO: namespace: e2e-tests-gc-8c5sh, resource: bindings, ignored listing per whitelist May 20 11:29:24.416: INFO: namespace e2e-tests-gc-8c5sh deletion completed in 6.109250828s • [SLOW TEST:11.556 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:29:24.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-2850e0f8-9a8d-11ea-b520-0242ac110018 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-2850e0f8-9a8d-11ea-b520-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:29:30.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jqx4t" for this suite. 
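The projected-ConfigMap update exercised above boils down to mounting a ConfigMap through a projected volume and then changing the ConfigMap's data while the pod is running, so the kubelet rewrites the mounted file in place. A minimal sketch of that pattern follows; the ConfigMap/pod names and the busybox image are illustrative stand-ins, not the test's own fixtures.

kubectl create configmap demo-config --from-literal=data-1=value-1

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  containers:
  - name: reader
    image: busybox                      # stand-in image
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; echo; sleep 2; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
EOF

# Update the ConfigMap; the kubelet refreshes the projected file in place,
# which is what the "waiting to observe update in volume" step polls for.
kubectl patch configmap demo-config -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f projected-configmap-demo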
May 20 11:29:52.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:29:52.748: INFO: namespace: e2e-tests-projected-jqx4t, resource: bindings, ignored listing per whitelist May 20 11:29:52.772: INFO: namespace e2e-tests-projected-jqx4t deletion completed in 22.08048175s • [SLOW TEST:28.356 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:29:52.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 20 11:29:52.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hkzxt' May 20 11:29:55.392: INFO: stderr: "" May 20 11:29:55.392: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 20 11:29:55.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hkzxt' May 20 11:29:55.511: INFO: stderr: "" May 20 11:29:55.511: INFO: stdout: "update-demo-nautilus-458fw update-demo-nautilus-pn8xb " May 20 11:29:55.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-458fw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hkzxt' May 20 11:29:55.607: INFO: stderr: "" May 20 11:29:55.607: INFO: stdout: "" May 20 11:29:55.607: INFO: update-demo-nautilus-458fw is created but not running May 20 11:30:00.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:00.712: INFO: stderr: "" May 20 11:30:00.712: INFO: stdout: "update-demo-nautilus-458fw update-demo-nautilus-pn8xb " May 20 11:30:00.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-458fw -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:00.814: INFO: stderr: "" May 20 11:30:00.814: INFO: stdout: "true" May 20 11:30:00.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-458fw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:00.917: INFO: stderr: "" May 20 11:30:00.917: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 11:30:00.917: INFO: validating pod update-demo-nautilus-458fw May 20 11:30:00.922: INFO: got data: { "image": "nautilus.jpg" } May 20 11:30:00.922: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 11:30:00.923: INFO: update-demo-nautilus-458fw is verified up and running May 20 11:30:00.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pn8xb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:01.032: INFO: stderr: "" May 20 11:30:01.032: INFO: stdout: "true" May 20 11:30:01.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pn8xb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:01.209: INFO: stderr: "" May 20 11:30:01.209: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 11:30:01.209: INFO: validating pod update-demo-nautilus-pn8xb May 20 11:30:01.215: INFO: got data: { "image": "nautilus.jpg" } May 20 11:30:01.215: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 11:30:01.215: INFO: update-demo-nautilus-pn8xb is verified up and running STEP: scaling down the replication controller May 20 11:30:01.218: INFO: scanned /root for discovery docs: May 20 11:30:01.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:02.354: INFO: stderr: "" May 20 11:30:02.354: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 20 11:30:02.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:02.464: INFO: stderr: "" May 20 11:30:02.464: INFO: stdout: "update-demo-nautilus-458fw update-demo-nautilus-pn8xb " STEP: Replicas for name=update-demo: expected=1 actual=2 May 20 11:30:07.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:07.556: INFO: stderr: "" May 20 11:30:07.556: INFO: stdout: "update-demo-nautilus-458fw update-demo-nautilus-pn8xb " STEP: Replicas for name=update-demo: expected=1 actual=2 May 20 11:30:12.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:12.682: INFO: stderr: "" May 20 11:30:12.682: INFO: stdout: "update-demo-nautilus-458fw " May 20 11:30:12.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-458fw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:12.780: INFO: stderr: "" May 20 11:30:12.780: INFO: stdout: "true" May 20 11:30:12.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-458fw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:12.876: INFO: stderr: "" May 20 11:30:12.876: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 11:30:12.876: INFO: validating pod update-demo-nautilus-458fw May 20 11:30:12.880: INFO: got data: { "image": "nautilus.jpg" } May 20 11:30:12.880: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 11:30:12.880: INFO: update-demo-nautilus-458fw is verified up and running STEP: scaling up the replication controller May 20 11:30:12.882: INFO: scanned /root for discovery docs: May 20 11:30:12.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:14.010: INFO: stderr: "" May 20 11:30:14.010: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 20 11:30:14.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:14.126: INFO: stderr: "" May 20 11:30:14.126: INFO: stdout: "update-demo-nautilus-458fw update-demo-nautilus-zxzwj " May 20 11:30:14.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-458fw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:14.226: INFO: stderr: "" May 20 11:30:14.226: INFO: stdout: "true" May 20 11:30:14.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-458fw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:14.335: INFO: stderr: "" May 20 11:30:14.335: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 11:30:14.335: INFO: validating pod update-demo-nautilus-458fw May 20 11:30:14.361: INFO: got data: { "image": "nautilus.jpg" } May 20 11:30:14.361: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 11:30:14.361: INFO: update-demo-nautilus-458fw is verified up and running May 20 11:30:14.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zxzwj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:14.456: INFO: stderr: "" May 20 11:30:14.456: INFO: stdout: "" May 20 11:30:14.456: INFO: update-demo-nautilus-zxzwj is created but not running May 20 11:30:19.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:19.571: INFO: stderr: "" May 20 11:30:19.571: INFO: stdout: "update-demo-nautilus-458fw update-demo-nautilus-zxzwj " May 20 11:30:19.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-458fw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:19.682: INFO: stderr: "" May 20 11:30:19.682: INFO: stdout: "true" May 20 11:30:19.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-458fw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:19.781: INFO: stderr: "" May 20 11:30:19.781: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 11:30:19.781: INFO: validating pod update-demo-nautilus-458fw May 20 11:30:19.784: INFO: got data: { "image": "nautilus.jpg" } May 20 11:30:19.784: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 11:30:19.784: INFO: update-demo-nautilus-458fw is verified up and running May 20 11:30:19.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zxzwj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:19.892: INFO: stderr: "" May 20 11:30:19.892: INFO: stdout: "true" May 20 11:30:19.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zxzwj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:19.991: INFO: stderr: "" May 20 11:30:19.991: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 11:30:19.991: INFO: validating pod update-demo-nautilus-zxzwj May 20 11:30:19.995: INFO: got data: { "image": "nautilus.jpg" } May 20 11:30:19.995: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 11:30:19.995: INFO: update-demo-nautilus-zxzwj is verified up and running STEP: using delete to clean up resources May 20 11:30:19.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:20.105: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 11:30:20.105: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 20 11:30:20.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-hkzxt' May 20 11:30:20.209: INFO: stderr: "No resources found.\n" May 20 11:30:20.209: INFO: stdout: "" May 20 11:30:20.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-hkzxt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 20 11:30:20.314: INFO: stderr: "" May 20 11:30:20.314: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:30:20.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hkzxt" for this suite. 
May 20 11:30:42.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:30:42.451: INFO: namespace: e2e-tests-kubectl-hkzxt, resource: bindings, ignored listing per whitelist May 20 11:30:42.457: INFO: namespace e2e-tests-kubectl-hkzxt deletion completed in 22.139340631s • [SLOW TEST:49.685 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:30:42.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:31:16.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-x48zv" for this suite. 
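The terminate-cmd-* containers above are short-lived commands run under the three restart policies so the test can compare RestartCount, Phase, Ready and State. A hand-run sketch of the simplest of those cases (restartPolicy Never with a clean exit) is below; the pod name and busybox image are assumptions.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-demo
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd-demo
    image: busybox          # stand-in image
    command: ["sh", "-c", "exit 0"]
EOF

# With restartPolicy Never and exit code 0 the pod should settle on
# Phase=Succeeded, RestartCount=0 and a terminated state with exitCode 0:
kubectl get pod terminate-cmd-demo -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount} {.status.containerStatuses[0].state.terminated.exitCode}'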
May 20 11:31:22.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:31:22.834: INFO: namespace: e2e-tests-container-runtime-x48zv, resource: bindings, ignored listing per whitelist May 20 11:31:22.836: INFO: namespace e2e-tests-container-runtime-x48zv deletion completed in 6.218804945s • [SLOW TEST:40.378 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:31:22.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 11:31:22.953: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6edda0bc-9a8d-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-8dzzs" to be "success or failure" May 20 11:31:22.969: INFO: Pod "downwardapi-volume-6edda0bc-9a8d-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.090127ms May 20 11:31:24.974: INFO: Pod "downwardapi-volume-6edda0bc-9a8d-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020687518s May 20 11:31:26.978: INFO: Pod "downwardapi-volume-6edda0bc-9a8d-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024518322s STEP: Saw pod success May 20 11:31:26.978: INFO: Pod "downwardapi-volume-6edda0bc-9a8d-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:31:26.980: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6edda0bc-9a8d-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 11:31:27.189: INFO: Waiting for pod downwardapi-volume-6edda0bc-9a8d-11ea-b520-0242ac110018 to disappear May 20 11:31:27.192: INFO: Pod downwardapi-volume-6edda0bc-9a8d-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:31:27.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8dzzs" for this suite. 
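What this test checks is that a downward API resourceFieldRef for limits.memory falls back to the node's allocatable memory when the container declares no limit of its own. A minimal projected-volume sketch of that behaviour follows; names and the busybox image are illustrative.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox          # stand-in image; deliberately no resources.limits
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF

# Because no memory limit is set, the logged value should equal the node's
# allocatable memory rather than a per-container limit.
kubectl logs projected-memlimit-demo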
May 20 11:31:33.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:31:33.223: INFO: namespace: e2e-tests-projected-8dzzs, resource: bindings, ignored listing per whitelist May 20 11:31:33.282: INFO: namespace e2e-tests-projected-8dzzs deletion completed in 6.083184844s • [SLOW TEST:10.446 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:31:33.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 20 11:31:33.412: INFO: Waiting up to 5m0s for pod "var-expansion-7519d532-9a8d-11ea-b520-0242ac110018" in namespace "e2e-tests-var-expansion-hzlnh" to be "success or failure" May 20 11:31:33.422: INFO: Pod "var-expansion-7519d532-9a8d-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.535556ms May 20 11:31:35.427: INFO: Pod "var-expansion-7519d532-9a8d-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015087053s May 20 11:31:37.431: INFO: Pod "var-expansion-7519d532-9a8d-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019036402s STEP: Saw pod success May 20 11:31:37.431: INFO: Pod "var-expansion-7519d532-9a8d-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:31:37.434: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-7519d532-9a8d-11ea-b520-0242ac110018 container dapi-container: STEP: delete the pod May 20 11:31:37.473: INFO: Waiting for pod var-expansion-7519d532-9a8d-11ea-b520-0242ac110018 to disappear May 20 11:31:37.488: INFO: Pod var-expansion-7519d532-9a8d-11ea-b520-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:31:37.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-hzlnh" for this suite. 
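The var-expansion pod relies on the kubelet substituting $(VAR) references from the container's environment into command/args before the container starts. A sketch of that mechanism (names, image and the MESSAGE variable are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox          # stand-in image
    env:
    - name: MESSAGE
      value: "test-value"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]   # expanded by the kubelet from env, not by the shell
EOF

# The pod log should print the substituted value:
kubectl logs var-expansion-demo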
May 20 11:31:43.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:31:43.530: INFO: namespace: e2e-tests-var-expansion-hzlnh, resource: bindings, ignored listing per whitelist May 20 11:31:43.566: INFO: namespace e2e-tests-var-expansion-hzlnh deletion completed in 6.075185853s • [SLOW TEST:10.284 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:31:43.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 11:31:43.728: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 20 11:31:43.752: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:43.754: INFO: Number of nodes with available pods: 0 May 20 11:31:43.754: INFO: Node hunter-worker is running more than one daemon pod May 20 11:31:44.759: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:44.762: INFO: Number of nodes with available pods: 0 May 20 11:31:44.762: INFO: Node hunter-worker is running more than one daemon pod May 20 11:31:45.759: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:45.761: INFO: Number of nodes with available pods: 0 May 20 11:31:45.761: INFO: Node hunter-worker is running more than one daemon pod May 20 11:31:46.819: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:46.822: INFO: Number of nodes with available pods: 0 May 20 11:31:46.822: INFO: Node hunter-worker is running more than one daemon pod May 20 11:31:47.759: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:47.762: INFO: Number of nodes with available pods: 0 May 20 11:31:47.762: INFO: Node hunter-worker is running more than one daemon pod May 20 11:31:48.760: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:48.764: INFO: Number of nodes with available pods: 2 May 20 11:31:48.764: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 20 11:31:48.792: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:48.792: INFO: Wrong image for pod: daemon-set-rxb7x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:48.815: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:49.819: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:49.819: INFO: Wrong image for pod: daemon-set-rxb7x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:49.822: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:50.820: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:50.820: INFO: Wrong image for pod: daemon-set-rxb7x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 20 11:31:50.824: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:51.819: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:51.819: INFO: Wrong image for pod: daemon-set-rxb7x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:51.822: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:52.819: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:52.819: INFO: Wrong image for pod: daemon-set-rxb7x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:52.819: INFO: Pod daemon-set-rxb7x is not available May 20 11:31:52.823: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:53.819: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:53.819: INFO: Wrong image for pod: daemon-set-rxb7x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:53.819: INFO: Pod daemon-set-rxb7x is not available May 20 11:31:53.822: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:54.819: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:54.819: INFO: Wrong image for pod: daemon-set-rxb7x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:54.820: INFO: Pod daemon-set-rxb7x is not available May 20 11:31:54.824: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:55.819: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:55.819: INFO: Wrong image for pod: daemon-set-rxb7x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:55.819: INFO: Pod daemon-set-rxb7x is not available May 20 11:31:55.823: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:56.819: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:56.819: INFO: Wrong image for pod: daemon-set-rxb7x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 20 11:31:56.819: INFO: Pod daemon-set-rxb7x is not available May 20 11:31:56.822: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:57.844: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:57.844: INFO: Wrong image for pod: daemon-set-rxb7x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:57.844: INFO: Pod daemon-set-rxb7x is not available May 20 11:31:57.848: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:58.820: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:58.820: INFO: Wrong image for pod: daemon-set-rxb7x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:58.820: INFO: Pod daemon-set-rxb7x is not available May 20 11:31:58.824: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:31:59.838: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:59.838: INFO: Wrong image for pod: daemon-set-rxb7x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:31:59.838: INFO: Pod daemon-set-rxb7x is not available May 20 11:31:59.842: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:32:00.820: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:32:00.820: INFO: Wrong image for pod: daemon-set-rxb7x. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:32:00.820: INFO: Pod daemon-set-rxb7x is not available May 20 11:32:00.825: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:32:01.818: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:32:01.818: INFO: Pod daemon-set-vwdp7 is not available May 20 11:32:01.832: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:32:02.818: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:32:02.818: INFO: Pod daemon-set-vwdp7 is not available May 20 11:32:02.822: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:32:03.819: INFO: Wrong image for pod: daemon-set-qsqjr. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:32:03.819: INFO: Pod daemon-set-vwdp7 is not available May 20 11:32:03.823: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:32:04.819: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:32:04.819: INFO: Pod daemon-set-vwdp7 is not available May 20 11:32:04.822: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:32:05.820: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:32:05.823: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:32:06.819: INFO: Wrong image for pod: daemon-set-qsqjr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 20 11:32:06.819: INFO: Pod daemon-set-qsqjr is not available May 20 11:32:06.824: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:32:07.819: INFO: Pod daemon-set-p2n2w is not available May 20 11:32:07.823: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 20 11:32:07.827: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:32:07.829: INFO: Number of nodes with available pods: 1 May 20 11:32:07.829: INFO: Node hunter-worker is running more than one daemon pod May 20 11:32:08.835: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:32:08.838: INFO: Number of nodes with available pods: 1 May 20 11:32:08.838: INFO: Node hunter-worker is running more than one daemon pod May 20 11:32:09.835: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:32:09.838: INFO: Number of nodes with available pods: 1 May 20 11:32:09.838: INFO: Node hunter-worker is running more than one daemon pod May 20 11:32:10.835: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:32:10.838: INFO: Number of nodes with available pods: 1 May 20 11:32:10.838: INFO: Node hunter-worker is running more than one daemon pod May 20 11:32:11.832: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 11:32:11.835: INFO: Number of nodes with available pods: 2 May 20 11:32:11.835: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-nptpw, will wait for the garbage collector to delete the pods May 20 11:32:11.948: INFO: Deleting DaemonSet.extensions daemon-set took: 6.695846ms May 20 11:32:12.048: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.284554ms May 20 11:32:21.751: INFO: Number of nodes with available pods: 0 May 20 11:32:21.751: INFO: Number of running nodes: 0, number of available pods: 0 May 20 11:32:21.754: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-nptpw/daemonsets","resourceVersion":"11565904"},"items":null} May 20 11:32:21.757: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-nptpw/pods","resourceVersion":"11565904"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:32:21.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-nptpw" for this suite. 
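Outside the framework, the same rolling update can be driven with a DaemonSet whose updateStrategy is RollingUpdate plus a single image change; the controller then replaces daemon pods node by node, which is exactly the "Wrong image for pod" / "not available" polling shown above. The two images below come from the log; the label key and container name are assumptions.

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF

# Update the pod template image and watch the rollout replace pods one node at a time.
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set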
May 20 11:32:27.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:32:27.864: INFO: namespace: e2e-tests-daemonsets-nptpw, resource: bindings, ignored listing per whitelist May 20 11:32:27.881: INFO: namespace e2e-tests-daemonsets-nptpw deletion completed in 6.088463385s • [SLOW TEST:44.315 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:32:27.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 11:32:28.011: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95a723a5-9a8d-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-kfmpc" to be "success or failure" May 20 11:32:28.034: INFO: Pod "downwardapi-volume-95a723a5-9a8d-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.951037ms May 20 11:32:30.182: INFO: Pod "downwardapi-volume-95a723a5-9a8d-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170608439s May 20 11:32:32.186: INFO: Pod "downwardapi-volume-95a723a5-9a8d-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174838839s STEP: Saw pod success May 20 11:32:32.186: INFO: Pod "downwardapi-volume-95a723a5-9a8d-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:32:32.188: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-95a723a5-9a8d-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 11:32:32.228: INFO: Waiting for pod downwardapi-volume-95a723a5-9a8d-11ea-b520-0242ac110018 to disappear May 20 11:32:32.250: INFO: Pod downwardapi-volume-95a723a5-9a8d-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:32:32.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kfmpc" for this suite. 
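The "podname only" case is the smallest possible downward API projection: one file populated from metadata.name. A sketch with stand-in names and image:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox          # stand-in image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF

# Should print the pod's own name, projected-podname-demo:
kubectl logs projected-podname-demo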
May 20 11:32:38.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:32:38.387: INFO: namespace: e2e-tests-projected-kfmpc, resource: bindings, ignored listing per whitelist May 20 11:32:38.390: INFO: namespace e2e-tests-projected-kfmpc deletion completed in 6.135313036s • [SLOW TEST:10.508 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:32:38.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token May 20 11:32:39.017: INFO: Waiting up to 5m0s for pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-5jtwt" in namespace "e2e-tests-svcaccounts-gtlzx" to be "success or failure" May 20 11:32:39.023: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-5jtwt": Phase="Pending", Reason="", readiness=false. Elapsed: 5.596792ms May 20 11:32:41.233: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-5jtwt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216081317s May 20 11:32:43.237: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-5jtwt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22011774s May 20 11:32:45.242: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-5jtwt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.224699418s STEP: Saw pod success May 20 11:32:45.242: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-5jtwt" satisfied condition "success or failure" May 20 11:32:45.245: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-5jtwt container token-test: STEP: delete the pod May 20 11:32:45.372: INFO: Waiting for pod pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-5jtwt to disappear May 20 11:32:45.379: INFO: Pod pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-5jtwt no longer exists STEP: Creating a pod to test consume service account root CA May 20 11:32:45.382: INFO: Waiting up to 5m0s for pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-6dx29" in namespace "e2e-tests-svcaccounts-gtlzx" to be "success or failure" May 20 11:32:45.400: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-6dx29": Phase="Pending", Reason="", readiness=false. Elapsed: 17.154793ms May 20 11:32:47.403: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-6dx29": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020978269s May 20 11:32:49.462: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-6dx29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07912087s May 20 11:32:51.466: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-6dx29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083473218s STEP: Saw pod success May 20 11:32:51.466: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-6dx29" satisfied condition "success or failure" May 20 11:32:51.468: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-6dx29 container root-ca-test: STEP: delete the pod May 20 11:32:51.512: INFO: Waiting for pod pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-6dx29 to disappear May 20 11:32:51.517: INFO: Pod pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-6dx29 no longer exists STEP: Creating a pod to test consume service account namespace May 20 11:32:51.521: INFO: Waiting up to 5m0s for pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-x5dnn" in namespace "e2e-tests-svcaccounts-gtlzx" to be "success or failure" May 20 11:32:51.523: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-x5dnn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223365ms May 20 11:32:53.527: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-x5dnn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005585503s May 20 11:32:55.695: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-x5dnn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174145673s May 20 11:32:57.700: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-x5dnn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.17879824s STEP: Saw pod success May 20 11:32:57.700: INFO: Pod "pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-x5dnn" satisfied condition "success or failure" May 20 11:32:57.703: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-x5dnn container namespace-test: STEP: delete the pod May 20 11:32:57.735: INFO: Waiting for pod pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-x5dnn to disappear May 20 11:32:57.742: INFO: Pod pod-service-account-9c36a884-9a8d-11ea-b520-0242ac110018-x5dnn no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:32:57.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-gtlzx" for this suite. 
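The three pods above (token-test, root-ca-test, namespace-test) each read one of the files the kubelet mounts from the service account token secret at /var/run/secrets/kubernetes.io/serviceaccount. A combined sketch of that check, with the pod name and busybox image as assumptions:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: svcaccount-demo
spec:
  restartPolicy: Never
  serviceAccountName: default
  containers:
  - name: token-test
    image: busybox          # stand-in image
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount && cat /var/run/secrets/kubernetes.io/serviceaccount/namespace"]
EOF

# Expect token, ca.crt and namespace in the listing, with the namespace file
# matching the namespace the pod runs in.
kubectl logs svcaccount-demo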
May 20 11:33:03.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:33:03.829: INFO: namespace: e2e-tests-svcaccounts-gtlzx, resource: bindings, ignored listing per whitelist May 20 11:33:03.842: INFO: namespace e2e-tests-svcaccounts-gtlzx deletion completed in 6.096389325s • [SLOW TEST:25.452 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:33:03.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 11:33:03.987: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab17941e-9a8d-11ea-b520-0242ac110018" in namespace "e2e-tests-downward-api-b6f7s" to be "success or failure" May 20 11:33:03.991: INFO: Pod "downwardapi-volume-ab17941e-9a8d-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.322531ms May 20 11:33:05.995: INFO: Pod "downwardapi-volume-ab17941e-9a8d-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007794231s May 20 11:33:08.000: INFO: Pod "downwardapi-volume-ab17941e-9a8d-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012202926s STEP: Saw pod success May 20 11:33:08.000: INFO: Pod "downwardapi-volume-ab17941e-9a8d-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:33:08.002: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ab17941e-9a8d-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 11:33:08.046: INFO: Waiting for pod downwardapi-volume-ab17941e-9a8d-11ea-b520-0242ac110018 to disappear May 20 11:33:08.057: INFO: Pod downwardapi-volume-ab17941e-9a8d-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:33:08.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-b6f7s" for this suite. 
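Here the downward API volume exposes the container's own CPU request; with a 1m divisor a 250m request is rendered as the integer 250. A sketch of that wiring (names, image and the 250m figure are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox          # stand-in image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: "250m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: "1m"
EOF

# Should print 250 (the request in millicores, per the 1m divisor):
kubectl logs downward-cpu-request-demo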
May 20 11:33:14.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:33:14.134: INFO: namespace: e2e-tests-downward-api-b6f7s, resource: bindings, ignored listing per whitelist May 20 11:33:14.144: INFO: namespace e2e-tests-downward-api-b6f7s deletion completed in 6.08428366s • [SLOW TEST:10.302 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:33:14.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 11:33:14.299: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.860255ms) May 20 11:33:14.302: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.002238ms) May 20 11:33:14.305: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.217766ms) May 20 11:33:14.308: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.011887ms) May 20 11:33:14.311: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.915247ms) May 20 11:33:14.314: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.03523ms) May 20 11:33:14.317: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.116596ms) May 20 11:33:14.321: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.557886ms) May 20 11:33:14.324: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.192803ms) May 20 11:33:14.327: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.236867ms) May 20 11:33:14.331: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.558952ms) May 20 11:33:14.334: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.352497ms) May 20 11:33:14.338: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.362022ms) May 20 11:33:14.341: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.572113ms) May 20 11:33:14.345: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.18246ms) May 20 11:33:14.378: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 33.631513ms) May 20 11:33:14.382: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.511301ms) May 20 11:33:14.386: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.255314ms) May 20 11:33:14.390: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.602699ms) May 20 11:33:14.393: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.597937ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:33:14.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-f8lt8" for this suite. May 20 11:33:20.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:33:20.460: INFO: namespace: e2e-tests-proxy-f8lt8, resource: bindings, ignored listing per whitelist May 20 11:33:20.488: INFO: namespace e2e-tests-proxy-f8lt8 deletion completed in 6.090956221s • [SLOW TEST:6.344 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:33:20.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 20 11:33:20.590: INFO: Waiting up to 5m0s for pod "pod-b4fc677b-9a8d-11ea-b520-0242ac110018" in namespace "e2e-tests-emptydir-7fpm2" to be "success or failure" May 20 11:33:20.647: INFO: Pod "pod-b4fc677b-9a8d-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 56.360856ms May 20 11:33:22.731: INFO: Pod "pod-b4fc677b-9a8d-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140472394s May 20 11:33:24.735: INFO: Pod "pod-b4fc677b-9a8d-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.145001874s STEP: Saw pod success May 20 11:33:24.735: INFO: Pod "pod-b4fc677b-9a8d-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:33:24.739: INFO: Trying to get logs from node hunter-worker pod pod-b4fc677b-9a8d-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 11:33:24.920: INFO: Waiting for pod pod-b4fc677b-9a8d-11ea-b520-0242ac110018 to disappear May 20 11:33:24.928: INFO: Pod pod-b4fc677b-9a8d-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:33:24.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-7fpm2" for this suite. 
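Note: the (non-root,0666,tmpfs) case above uses medium Memory, while its sibling cases in this suite use the node-default medium. As far as I can tell, the mode and the non-root user come from the test pod's own securityContext and file writes, since emptyDir itself has no mode field. A sketch of the two volume flavours, assuming the k8s.io/api module:

package main

// Two emptyDir variants: medium "" (node default, backed by the node's
// filesystem) and medium "Memory" (tmpfs).
import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			Name: "scratch-default",
			VolumeSource: corev1.VolumeSource{
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
			},
		},
		{
			Name: "scratch-tmpfs",
			VolumeSource: corev1.VolumeSource{
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		},
	}
	out, _ := json.MarshalIndent(volumes, "", "  ")
	fmt.Println(string(out))
}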
May 20 11:33:30.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:33:31.082: INFO: namespace: e2e-tests-emptydir-7fpm2, resource: bindings, ignored listing per whitelist May 20 11:33:31.089: INFO: namespace e2e-tests-emptydir-7fpm2 deletion completed in 6.156786939s • [SLOW TEST:10.601 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:33:31.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-bb4f2c73-9a8d-11ea-b520-0242ac110018 STEP: Creating secret with name s-test-opt-upd-bb4f2ce5-9a8d-11ea-b520-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-bb4f2c73-9a8d-11ea-b520-0242ac110018 STEP: Updating secret s-test-opt-upd-bb4f2ce5-9a8d-11ea-b520-0242ac110018 STEP: Creating secret with name s-test-opt-create-bb4f2d0c-9a8d-11ea-b520-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:34:41.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-n9bm6" for this suite. 
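Note: the optional-update test above projects several secrets into one volume, deletes one, updates another, creates a third, and then waits for the volume contents to converge. A minimal sketch of a projected secret source with the optional flag set, using placeholder names rather than the generated ones from the log:

package main

// A projected volume combining two secret sources, both marked optional.
import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-secrets",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
						Optional:             &optional, // the pod keeps running if this secret disappears
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}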
May 20 11:35:03.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:35:03.728: INFO: namespace: e2e-tests-projected-n9bm6, resource: bindings, ignored listing per whitelist May 20 11:35:03.780: INFO: namespace e2e-tests-projected-n9bm6 deletion completed in 22.115568392s • [SLOW TEST:92.691 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:35:03.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 20 11:35:03.924: INFO: Waiting up to 5m0s for pod "pod-f292cfeb-9a8d-11ea-b520-0242ac110018" in namespace "e2e-tests-emptydir-cnkjl" to be "success or failure" May 20 11:35:03.954: INFO: Pod "pod-f292cfeb-9a8d-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 30.095906ms May 20 11:35:05.958: INFO: Pod "pod-f292cfeb-9a8d-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03411019s May 20 11:35:08.056: INFO: Pod "pod-f292cfeb-9a8d-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.132100964s STEP: Saw pod success May 20 11:35:08.056: INFO: Pod "pod-f292cfeb-9a8d-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:35:08.059: INFO: Trying to get logs from node hunter-worker pod pod-f292cfeb-9a8d-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 11:35:08.298: INFO: Waiting for pod pod-f292cfeb-9a8d-11ea-b520-0242ac110018 to disappear May 20 11:35:08.487: INFO: Pod pod-f292cfeb-9a8d-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:35:08.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-cnkjl" for this suite. 
May 20 11:35:14.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:35:14.594: INFO: namespace: e2e-tests-emptydir-cnkjl, resource: bindings, ignored listing per whitelist May 20 11:35:14.624: INFO: namespace e2e-tests-emptydir-cnkjl deletion completed in 6.132115305s • [SLOW TEST:10.845 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:35:14.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f90f0a5b-9a8d-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 11:35:14.811: INFO: Waiting up to 5m0s for pod "pod-secrets-f90fa44a-9a8d-11ea-b520-0242ac110018" in namespace "e2e-tests-secrets-gt2pb" to be "success or failure" May 20 11:35:14.936: INFO: Pod "pod-secrets-f90fa44a-9a8d-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 125.239474ms May 20 11:35:16.940: INFO: Pod "pod-secrets-f90fa44a-9a8d-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129216381s May 20 11:35:18.944: INFO: Pod "pod-secrets-f90fa44a-9a8d-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.133439641s STEP: Saw pod success May 20 11:35:18.944: INFO: Pod "pod-secrets-f90fa44a-9a8d-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:35:18.947: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-f90fa44a-9a8d-11ea-b520-0242ac110018 container secret-volume-test: STEP: delete the pod May 20 11:35:19.010: INFO: Waiting for pod pod-secrets-f90fa44a-9a8d-11ea-b520-0242ac110018 to disappear May 20 11:35:19.015: INFO: Pod pod-secrets-f90fa44a-9a8d-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:35:19.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-gt2pb" for this suite. 
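Note: defaultMode on a secret volume sets the file mode of every projected key. A minimal sketch, with the mode value chosen for illustration rather than copied from the test:

package main

// A secret volume with an explicit defaultMode (an octal *int32 applied to
// every file the secret projects).
import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  "secret-test",
				DefaultMode: &mode,
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}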
May 20 11:35:25.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:35:25.064: INFO: namespace: e2e-tests-secrets-gt2pb, resource: bindings, ignored listing per whitelist May 20 11:35:25.183: INFO: namespace e2e-tests-secrets-gt2pb deletion completed in 6.164180199s • [SLOW TEST:10.558 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:35:25.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-bl92m STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-bl92m to expose endpoints map[] May 20 11:35:25.363: INFO: Get endpoints failed (11.913522ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 20 11:35:26.367: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-bl92m exposes endpoints map[] (1.015913104s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-bl92m STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-bl92m to expose endpoints map[pod1:[80]] May 20 11:35:30.424: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-bl92m exposes endpoints map[pod1:[80]] (4.05096647s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-bl92m STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-bl92m to expose endpoints map[pod1:[80] pod2:[80]] May 20 11:35:33.476: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-bl92m exposes endpoints map[pod1:[80] pod2:[80]] (3.04898514s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-bl92m STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-bl92m to expose endpoints map[pod2:[80]] May 20 11:35:34.585: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-bl92m exposes endpoints map[pod2:[80]] (1.106019739s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-bl92m STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-bl92m to expose endpoints map[] May 20 11:35:34.599: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-bl92m exposes endpoints map[] (5.937596ms elapsed) [AfterEach] [sig-network] Services 
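Note: the endpoint transitions logged above (map[] to map[pod1:[80]] to map[pod1:[80] pod2:[80]] and back) are produced by the endpoints controller tracking pods that match the Service selector. A minimal sketch of such a selector-based Service; the label key and value are illustrative:

package main

// A Service on port 80 whose Endpoints are populated from pods matching the
// selector as they come and go.
import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "endpoint-test2"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}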
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:35:34.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-bl92m" for this suite. May 20 11:35:58.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:35:58.726: INFO: namespace: e2e-tests-services-bl92m, resource: bindings, ignored listing per whitelist May 20 11:35:58.729: INFO: namespace e2e-tests-services-bl92m deletion completed in 24.081726449s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:33.546 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:35:58.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-134f0214-9a8e-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 11:35:58.872: INFO: Waiting up to 5m0s for pod "pod-configmaps-1350b349-9a8e-11ea-b520-0242ac110018" in namespace "e2e-tests-configmap-7slfk" to be "success or failure" May 20 11:35:58.903: INFO: Pod "pod-configmaps-1350b349-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 30.868739ms May 20 11:36:00.907: INFO: Pod "pod-configmaps-1350b349-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0353252s May 20 11:36:02.911: INFO: Pod "pod-configmaps-1350b349-9a8e-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038731835s STEP: Saw pod success May 20 11:36:02.911: INFO: Pod "pod-configmaps-1350b349-9a8e-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:36:02.913: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-1350b349-9a8e-11ea-b520-0242ac110018 container configmap-volume-test: STEP: delete the pod May 20 11:36:02.949: INFO: Waiting for pod pod-configmaps-1350b349-9a8e-11ea-b520-0242ac110018 to disappear May 20 11:36:03.042: INFO: Pod pod-configmaps-1350b349-9a8e-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:36:03.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7slfk" for this suite. 
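Note: the "with mappings" variant above projects selected configMap keys to chosen paths via items, instead of mounting every key under its own name. A minimal sketch with illustrative key and path names:

package main

// A configMap volume that remaps one key to a nested path inside the mount.
import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				Items: []corev1.KeyToPath{
					{Key: "data-1", Path: "path/to/data-2"},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}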
May 20 11:36:09.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:36:09.102: INFO: namespace: e2e-tests-configmap-7slfk, resource: bindings, ignored listing per whitelist May 20 11:36:09.133: INFO: namespace e2e-tests-configmap-7slfk deletion completed in 6.086255296s • [SLOW TEST:10.404 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:36:09.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:37:09.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-hglqx" for this suite. 
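Note: the probe test above relies on readiness failures never restarting the container (unlike liveness failures), so the pod simply stays unready for the whole observation window. A sketch of a readiness probe that can never succeed; the handler is assigned through the embedded struct's promoted field so the snippet compiles against both older (Handler) and newer (ProbeHandler) versions of k8s.io/api:

package main

// A container whose readiness probe always fails, so it is never reported
// Ready and is never restarted.
import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	probe := &corev1.Probe{
		InitialDelaySeconds: 5,
		PeriodSeconds:       5,
		FailureThreshold:    3,
	}
	probe.Exec = &corev1.ExecAction{Command: []string{"/bin/false"}} // always exits non-zero

	c := corev1.Container{
		Name:           "probe-test",
		Image:          "busybox",
		Command:        []string{"sleep", "3600"},
		ReadinessProbe: probe,
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}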
May 20 11:37:31.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:37:31.465: INFO: namespace: e2e-tests-container-probe-hglqx, resource: bindings, ignored listing per whitelist May 20 11:37:31.514: INFO: namespace e2e-tests-container-probe-hglqx deletion completed in 22.095183701s • [SLOW TEST:82.381 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:37:31.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-4a9eee45-9a8e-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 11:37:31.639: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4aa0e762-9a8e-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-mc9xn" to be "success or failure" May 20 11:37:31.642: INFO: Pod "pod-projected-configmaps-4aa0e762-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.713741ms May 20 11:37:33.647: INFO: Pod "pod-projected-configmaps-4aa0e762-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007960277s May 20 11:37:35.651: INFO: Pod "pod-projected-configmaps-4aa0e762-9a8e-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012392301s STEP: Saw pod success May 20 11:37:35.651: INFO: Pod "pod-projected-configmaps-4aa0e762-9a8e-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:37:35.654: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-4aa0e762-9a8e-11ea-b520-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 20 11:37:35.704: INFO: Waiting for pod pod-projected-configmaps-4aa0e762-9a8e-11ea-b520-0242ac110018 to disappear May 20 11:37:35.714: INFO: Pod pod-projected-configmaps-4aa0e762-9a8e-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:37:35.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mc9xn" for this suite. 
May 20 11:37:41.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:37:41.758: INFO: namespace: e2e-tests-projected-mc9xn, resource: bindings, ignored listing per whitelist May 20 11:37:41.814: INFO: namespace e2e-tests-projected-mc9xn deletion completed in 6.095113053s • [SLOW TEST:10.299 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:37:41.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 11:37:41.917: INFO: Creating deployment "nginx-deployment" May 20 11:37:41.920: INFO: Waiting for observed generation 1 May 20 11:37:43.945: INFO: Waiting for all required pods to come up May 20 11:37:43.950: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 20 11:37:56.090: INFO: Waiting for deployment "nginx-deployment" to complete May 20 11:37:56.096: INFO: Updating deployment "nginx-deployment" with a non-existent image May 20 11:37:56.103: INFO: Updating deployment nginx-deployment May 20 11:37:56.103: INFO: Waiting for observed generation 2 May 20 11:37:58.298: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 20 11:37:58.301: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 20 11:37:58.303: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 20 11:37:58.310: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 20 11:37:58.310: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 20 11:37:58.311: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 20 11:37:58.320: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 20 11:37:58.320: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 20 11:37:58.324: INFO: Updating deployment nginx-deployment May 20 11:37:58.325: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 20 11:37:58.580: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 20 11:37:58.756: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
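Note: the 20 and 13 the test verifies follow from proportional scaling. Before the scale-up the deployment may run 10 + maxSurge(3) = 13 pods, split 8 (healthy ReplicaSet) and 5 (stuck nginx:404 ReplicaSet); scaling to 30 raises the ceiling to 33, and each ReplicaSet is resized to roughly its share of the new total. A back-of-the-envelope check of that arithmetic (my reading of the controller's behaviour, not its exact code):

package main

// Reproduce the proportional split the log reports: 8 -> 20 and 5 -> 13.
import (
	"fmt"
	"math"
)

func main() {
	oldRS, newRS := 8.0, 5.0 // spec.replicas of each ReplicaSet while the rollout is stuck
	oldTotal := 10.0 + 3.0   // previous desired replicas + maxSurge
	newTotal := 30.0 + 3.0   // scaled-up desired replicas + maxSurge

	fmt.Println(math.Round(oldRS * newTotal / oldTotal)) // 20
	fmt.Println(math.Round(newRS * newTotal / oldTotal)) // 13
}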
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 20 11:37:59.162: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-h5rrp/deployments/nginx-deployment,UID:50c29f18-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567188,Generation:3,CreationTimestamp:2020-05-20 11:37:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-20 11:37:56 +0000 UTC 2020-05-20 11:37:41 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-05-20 11:37:58 +0000 UTC 2020-05-20 11:37:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 20 11:37:59.303: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-h5rrp/replicasets/nginx-deployment-5c98f8fb5,UID:593738cd-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567203,Generation:3,CreationTimestamp:2020-05-20 11:37:56 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 50c29f18-9a8e-11ea-99e8-0242ac110002 0xc00251bf27 0xc00251bf28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 20 11:37:59.303: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 20 11:37:59.303: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-h5rrp/replicasets/nginx-deployment-85ddf47c5d,UID:50c68018-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567189,Generation:3,CreationTimestamp:2020-05-20 11:37:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 50c29f18-9a8e-11ea-99e8-0242ac110002 0xc00251bff7 0xc00251bff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 20 11:37:59.381: INFO: Pod "nginx-deployment-5c98f8fb5-25tv2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-25tv2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-5c98f8fb5-25tv2,UID:5ae5b1a2-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567195,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 593738cd-9a8e-11ea-99e8-0242ac110002 0xc0024632b7 0xc0024632b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002463340} {node.kubernetes.io/unreachable Exists NoExecute 0xc002463360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.382: INFO: Pod "nginx-deployment-5c98f8fb5-5mvcw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5mvcw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-5c98f8fb5-5mvcw,UID:5ae5a99c-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567197,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 593738cd-9a8e-11ea-99e8-0242ac110002 0xc0024633d7 0xc0024633d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002463450} {node.kubernetes.io/unreachable Exists NoExecute 0xc002463470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.382: INFO: Pod "nginx-deployment-5c98f8fb5-64vvk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-64vvk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-5c98f8fb5-64vvk,UID:593928a4-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567130,Generation:0,CreationTimestamp:2020-05-20 11:37:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 593738cd-9a8e-11ea-99e8-0242ac110002 0xc0024634e7 0xc0024634e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002463590} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024635b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-20 11:37:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.382: INFO: Pod "nginx-deployment-5c98f8fb5-66x9h" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-66x9h,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-5c98f8fb5-66x9h,UID:59381659-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567114,Generation:0,CreationTimestamp:2020-05-20 11:37:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 593738cd-9a8e-11ea-99e8-0242ac110002 0xc002463677 0xc002463678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024636f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002463710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-20 11:37:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.383: INFO: Pod "nginx-deployment-5c98f8fb5-cvqwl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cvqwl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-5c98f8fb5-cvqwl,UID:5acbeb79-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567216,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 593738cd-9a8e-11ea-99e8-0242ac110002 0xc002463867 0xc002463868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024638e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002463900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-20 11:37:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.383: INFO: Pod "nginx-deployment-5c98f8fb5-d786d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-d786d,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-5c98f8fb5-d786d,UID:5ae5ba58-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567192,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 593738cd-9a8e-11ea-99e8-0242ac110002 0xc002463a77 0xc002463a78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] 
{map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002463af0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002463b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.383: INFO: Pod "nginx-deployment-5c98f8fb5-jsqfm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jsqfm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-5c98f8fb5-jsqfm,UID:5aeb1411-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567202,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 593738cd-9a8e-11ea-99e8-0242ac110002 0xc002463bf7 0xc002463bf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002463c70} {node.kubernetes.io/unreachable Exists 
NoExecute 0xc002463c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:59 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.383: INFO: Pod "nginx-deployment-5c98f8fb5-nqdnp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nqdnp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-5c98f8fb5-nqdnp,UID:5ae5aa2a-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567199,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 593738cd-9a8e-11ea-99e8-0242ac110002 0xc002463d07 0xc002463d08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002463e10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002463e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.383: INFO: Pod "nginx-deployment-5c98f8fb5-q5vhh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-q5vhh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-5c98f8fb5-q5vhh,UID:5ab1296a-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567207,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 593738cd-9a8e-11ea-99e8-0242ac110002 0xc002463ea7 0xc002463ea8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002463f20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002463f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-20 11:37:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.384: INFO: Pod "nginx-deployment-5c98f8fb5-qgwrh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qgwrh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-5c98f8fb5-qgwrh,UID:59624ab3-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567139,Generation:0,CreationTimestamp:2020-05-20 11:37:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 593738cd-9a8e-11ea-99e8-0242ac110002 0xc00246c0c7 0xc00246c0c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx 
nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246c140} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246c160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-20 11:37:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.384: INFO: Pod "nginx-deployment-5c98f8fb5-sbjgh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sbjgh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-5c98f8fb5-sbjgh,UID:59391a75-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567117,Generation:0,CreationTimestamp:2020-05-20 11:37:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 593738cd-9a8e-11ea-99e8-0242ac110002 0xc00246c277 0xc00246c278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246c2f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246c310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-20 11:37:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.384: INFO: Pod "nginx-deployment-5c98f8fb5-zf8qx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zf8qx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-5c98f8fb5-zf8qx,UID:59652ecf-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567140,Generation:0,CreationTimestamp:2020-05-20 11:37:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 593738cd-9a8e-11ea-99e8-0242ac110002 0xc00246c417 0xc00246c418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246c490} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246c4b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-20 11:37:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.384: INFO: Pod "nginx-deployment-5c98f8fb5-zqcqr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zqcqr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-5c98f8fb5-zqcqr,UID:5acc29f1-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567175,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 593738cd-9a8e-11ea-99e8-0242ac110002 0xc00246c5a7 0xc00246c5a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246c620} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246c640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.384: INFO: Pod "nginx-deployment-85ddf47c5d-4tr8d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4tr8d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-4tr8d,UID:5ae5ac73-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567194,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246c6b7 0xc00246c6b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246c7a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246c7c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.384: INFO: Pod "nginx-deployment-85ddf47c5d-5d4l8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5d4l8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-5d4l8,UID:5ab1409c-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567165,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246c837 0xc00246c838}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246c930} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246c950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.385: INFO: Pod "nginx-deployment-85ddf47c5d-8g6tq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8g6tq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-8g6tq,UID:50ceedea-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567041,Generation:0,CreationTimestamp:2020-05-20 11:37:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246c9d7 
0xc00246c9d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246cb90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246cbb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.90,StartTime:2020-05-20 11:37:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-20 11:37:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://221cebd8b9efce012236e93d2a9d365beb917ab833795064b1397c5f5c1c1dd4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.385: INFO: Pod "nginx-deployment-85ddf47c5d-99xks" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-99xks,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-99xks,UID:50dd5991-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567070,Generation:0,CreationTimestamp:2020-05-20 11:37:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246cc77 0xc00246cc78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] 
map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246ce30} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246ce50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.92,StartTime:2020-05-20 11:37:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-20 11:37:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ea71bfa4edee8ada1d13a931fade34c33344c9d7803c4166aff826d207318f1c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.385: INFO: Pod "nginx-deployment-85ddf47c5d-9mbjx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9mbjx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-9mbjx,UID:50ca0c6f-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567023,Generation:0,CreationTimestamp:2020-05-20 11:37:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246cf27 0xc00246cf28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246d000} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246d020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.89,StartTime:2020-05-20 11:37:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-20 11:37:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6c482535566fb9bf6fc9063fa0c422d2717f945d6d69b0859efeec87a77cea42}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.385: INFO: Pod "nginx-deployment-85ddf47c5d-9nwgv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9nwgv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-9nwgv,UID:5ab155ce-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567213,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246d0f7 0xc00246d0f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246d170} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246d190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-20 11:37:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.386: INFO: Pod "nginx-deployment-85ddf47c5d-9q9fx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9q9fx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-9q9fx,UID:50cc9c5c-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567056,Generation:0,CreationTimestamp:2020-05-20 11:37:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246d257 0xc00246d258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246d2d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246d2f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.114,StartTime:2020-05-20 11:37:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-20 11:37:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://581aa5be5565bc1fb48fdfbca7a963ef18fbf474448f63dcad82c7d63d019300}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.386: INFO: Pod "nginx-deployment-85ddf47c5d-9wm4m" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9wm4m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-9wm4m,UID:50cf00dc-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567075,Generation:0,CreationTimestamp:2020-05-20 11:37:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246d3b7 0xc00246d3b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246d440} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246d460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.115,StartTime:2020-05-20 11:37:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-20 11:37:54 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://078398c48a129a918d56dcf70560b2db7f924683267e9f12b34cd4b8979d87f6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.386: INFO: Pod "nginx-deployment-85ddf47c5d-c8jtz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c8jtz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-c8jtz,UID:5acc1374-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567176,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246d527 0xc00246d528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246d5a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246d5c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.386: INFO: Pod "nginx-deployment-85ddf47c5d-jpp2l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jpp2l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-jpp2l,UID:5ae587ec-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567191,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246d637 0xc00246d638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246d6b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246d6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.386: INFO: Pod "nginx-deployment-85ddf47c5d-md89d" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-md89d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-md89d,UID:50cc982b-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567048,Generation:0,CreationTimestamp:2020-05-20 11:37:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246d757 0xc00246d758}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246d7d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246d7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.113,StartTime:2020-05-20 11:37:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-20 11:37:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://dc26dda20f808cd23d016754bea51c197877f6700e99f1c4e93b1218205943a3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.386: INFO: Pod "nginx-deployment-85ddf47c5d-q72sk" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q72sk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-q72sk,UID:50cefe74-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567052,Generation:0,CreationTimestamp:2020-05-20 11:37:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246d8c7 0xc00246d8c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246d940} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246d960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.91,StartTime:2020-05-20 11:37:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-20 11:37:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://585de186531e5119bd17b958f9091e9923e4d2a9ee89ff2d011fd2437471c1eb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.387: INFO: Pod "nginx-deployment-85ddf47c5d-rsq49" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rsq49,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-rsq49,UID:50cef52a-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567080,Generation:0,CreationTimestamp:2020-05-20 
11:37:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246da27 0xc00246da28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246daa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246dac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.93,StartTime:2020-05-20 11:37:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-20 11:37:54 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://66357e784211047429430d0da48e742c6012baba5cc8c3504175503962077f8d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.387: INFO: Pod "nginx-deployment-85ddf47c5d-sg8s5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sg8s5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-sg8s5,UID:5ae5aa79-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567193,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246db97 
0xc00246db98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246dc10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246dc30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.387: INFO: Pod "nginx-deployment-85ddf47c5d-sjd2h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sjd2h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-sjd2h,UID:5acc2ae4-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567172,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246dcb7 0xc00246dcb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246dd30} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246dd50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.387: INFO: Pod "nginx-deployment-85ddf47c5d-spgm6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-spgm6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-spgm6,UID:5a90f7e3-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567201,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246ddc7 0xc00246ddc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246de40} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246de60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-20 11:37:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.387: INFO: Pod "nginx-deployment-85ddf47c5d-ssn9s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ssn9s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-ssn9s,UID:5acc2770-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567177,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc00246df17 0xc00246df18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00246df90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00246dfb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.387: INFO: Pod "nginx-deployment-85ddf47c5d-vztkz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vztkz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-vztkz,UID:5ae55856-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567196,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc0026d6027 0xc0026d6028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026d60a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026d60c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.387: INFO: Pod "nginx-deployment-85ddf47c5d-wvvzf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wvvzf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-wvvzf,UID:5acc0452-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567174,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc0026d6137 0xc0026d6138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026d61b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026d61d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 20 11:37:59.388: INFO: Pod "nginx-deployment-85ddf47c5d-z4nzw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z4nzw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-h5rrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h5rrp/pods/nginx-deployment-85ddf47c5d-z4nzw,UID:5ae5b493-9a8e-11ea-99e8-0242ac110002,ResourceVersion:11567198,Generation:0,CreationTimestamp:2020-05-20 11:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 50c68018-9a8e-11ea-99e8-0242ac110002 0xc0026d6247 0xc0026d6248}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56t96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56t96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56t96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026d62c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026d62e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 11:37:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:37:59.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-h5rrp" for this suite. May 20 11:38:35.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:38:35.641: INFO: namespace: e2e-tests-deployment-h5rrp, resource: bindings, ignored listing per whitelist May 20 11:38:35.694: INFO: namespace e2e-tests-deployment-h5rrp deletion completed in 36.209749094s • [SLOW TEST:53.880 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:38:35.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-70dbea69-9a8e-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 11:38:35.824: INFO: Waiting up to 5m0s for pod "pod-configmaps-70df8ddc-9a8e-11ea-b520-0242ac110018" in namespace "e2e-tests-configmap-hdqrd" to be "success or failure" May 20 11:38:35.879: INFO: Pod "pod-configmaps-70df8ddc-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 54.632853ms May 20 11:38:38.095: INFO: Pod "pod-configmaps-70df8ddc-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.271002579s May 20 11:38:40.100: INFO: Pod "pod-configmaps-70df8ddc-9a8e-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.275658442s May 20 11:38:42.104: INFO: Pod "pod-configmaps-70df8ddc-9a8e-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.280240953s STEP: Saw pod success May 20 11:38:42.104: INFO: Pod "pod-configmaps-70df8ddc-9a8e-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:38:42.108: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-70df8ddc-9a8e-11ea-b520-0242ac110018 container configmap-volume-test: STEP: delete the pod May 20 11:38:42.126: INFO: Waiting for pod pod-configmaps-70df8ddc-9a8e-11ea-b520-0242ac110018 to disappear May 20 11:38:42.130: INFO: Pod pod-configmaps-70df8ddc-9a8e-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:38:42.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hdqrd" for this suite. May 20 11:38:48.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:38:48.160: INFO: namespace: e2e-tests-configmap-hdqrd, resource: bindings, ignored listing per whitelist May 20 11:38:48.223: INFO: namespace e2e-tests-configmap-hdqrd deletion completed in 6.089345792s • [SLOW TEST:12.528 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:38:48.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 20 11:38:48.328: INFO: namespace e2e-tests-kubectl-8vwc7 May 20 11:38:48.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8vwc7' May 20 11:38:48.587: INFO: stderr: "" May 20 11:38:48.587: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
May 20 11:38:49.610: INFO: Selector matched 1 pods for map[app:redis] May 20 11:38:49.610: INFO: Found 0 / 1 May 20 11:38:50.592: INFO: Selector matched 1 pods for map[app:redis] May 20 11:38:50.592: INFO: Found 0 / 1 May 20 11:38:51.592: INFO: Selector matched 1 pods for map[app:redis] May 20 11:38:51.592: INFO: Found 0 / 1 May 20 11:38:52.592: INFO: Selector matched 1 pods for map[app:redis] May 20 11:38:52.592: INFO: Found 1 / 1 May 20 11:38:52.592: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 20 11:38:52.597: INFO: Selector matched 1 pods for map[app:redis] May 20 11:38:52.597: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 20 11:38:52.597: INFO: wait on redis-master startup in e2e-tests-kubectl-8vwc7 May 20 11:38:52.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5nhl6 redis-master --namespace=e2e-tests-kubectl-8vwc7' May 20 11:38:52.715: INFO: stderr: "" May 20 11:38:52.715: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 May 11:38:51.745 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 May 11:38:51.745 # Server started, Redis version 3.2.12\n1:M 20 May 11:38:51.746 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 May 11:38:51.746 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 20 11:38:52.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-8vwc7' May 20 11:38:52.880: INFO: stderr: "" May 20 11:38:52.880: INFO: stdout: "service/rm2 exposed\n" May 20 11:38:52.897: INFO: Service rm2 in namespace e2e-tests-kubectl-8vwc7 found. STEP: exposing service May 20 11:38:54.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-8vwc7' May 20 11:38:55.038: INFO: stderr: "" May 20 11:38:55.038: INFO: stdout: "service/rm3 exposed\n" May 20 11:38:55.059: INFO: Service rm3 in namespace e2e-tests-kubectl-8vwc7 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:38:57.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8vwc7" for this suite. 
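For reference, the expose sequence exercised by this test can be repeated by hand against any running ReplicationController; the service names, ports and namespace below mirror the values in the log and are otherwise arbitrary:

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-8vwc7
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-8vwc7
kubectl get service rm2 rm3 --namespace=e2e-tests-kubectl-8vwc7    # both services should target containerPort 6379

Exposing a service (rm2) rather than the RC reuses the selector of the object being exposed, so rm3 ends up selecting the same redis-master pod.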
May 20 11:39:19.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:39:19.206: INFO: namespace: e2e-tests-kubectl-8vwc7, resource: bindings, ignored listing per whitelist May 20 11:39:19.216: INFO: namespace e2e-tests-kubectl-8vwc7 deletion completed in 22.097141092s • [SLOW TEST:30.993 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:39:19.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-8ace1da0-9a8e-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 11:39:19.320: INFO: Waiting up to 5m0s for pod "pod-configmaps-8acfb5d5-9a8e-11ea-b520-0242ac110018" in namespace "e2e-tests-configmap-29rqn" to be "success or failure" May 20 11:39:19.337: INFO: Pod "pod-configmaps-8acfb5d5-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.551989ms May 20 11:39:21.359: INFO: Pod "pod-configmaps-8acfb5d5-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038679342s May 20 11:39:23.363: INFO: Pod "pod-configmaps-8acfb5d5-9a8e-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043217058s STEP: Saw pod success May 20 11:39:23.363: INFO: Pod "pod-configmaps-8acfb5d5-9a8e-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:39:23.367: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-8acfb5d5-9a8e-11ea-b520-0242ac110018 container configmap-volume-test: STEP: delete the pod May 20 11:39:23.387: INFO: Waiting for pod pod-configmaps-8acfb5d5-9a8e-11ea-b520-0242ac110018 to disappear May 20 11:39:23.389: INFO: Pod pod-configmaps-8acfb5d5-9a8e-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:39:23.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-29rqn" for this suite. 
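A minimal sketch of the ConfigMap-as-volume behaviour checked by the two ConfigMap specs above, runnable with plain kubectl; the configmap name, key-to-path mapping, file mode and busybox image are illustrative stand-ins rather than the test's exact fixtures:

kubectl create configmap example-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/config/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: example-config
      items:
      - key: data-1            # key in the ConfigMap
        path: path/to/data-1   # mapped file name under the mount point
        mode: 0400             # per-item file mode, as in the "Item mode set" spec
EOF
kubectl logs configmap-volume-demo    # prints "value-1" once the container has run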
May 20 11:39:29.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:39:29.432: INFO: namespace: e2e-tests-configmap-29rqn, resource: bindings, ignored listing per whitelist May 20 11:39:29.480: INFO: namespace e2e-tests-configmap-29rqn deletion completed in 6.088824925s • [SLOW TEST:10.264 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:39:29.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 20 11:39:37.674: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 11:39:37.797: INFO: Pod pod-with-prestop-http-hook still exists May 20 11:39:39.797: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 11:39:39.827: INFO: Pod pod-with-prestop-http-hook still exists May 20 11:39:41.797: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 11:39:41.802: INFO: Pod pod-with-prestop-http-hook still exists May 20 11:39:43.797: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 11:39:43.802: INFO: Pod pod-with-prestop-http-hook still exists May 20 11:39:45.797: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 11:39:45.801: INFO: Pod pod-with-prestop-http-hook still exists May 20 11:39:47.797: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 11:39:47.801: INFO: Pod pod-with-prestop-http-hook still exists May 20 11:39:49.797: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 11:39:49.801: INFO: Pod pod-with-prestop-http-hook still exists May 20 11:39:51.797: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 20 11:39:51.801: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:39:51.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9hdx6" for this suite. 
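The framework builds the hook pods itself, but the shape of an HTTP preStop hook is plain pod YAML; in this sketch the handler host/port, the echo path and the nginx image are placeholders for whatever endpoint should receive the hook request:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: nginx:1.14-alpine
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # illustrative path on the handler
          port: 8080
          host: 10.244.1.1          # placeholder IP of the pod that records the hook call
EOF
kubectl delete pod pod-with-prestop-http-hook   # deletion fires the preStop GET before the container is stopped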
May 20 11:40:13.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:40:13.851: INFO: namespace: e2e-tests-container-lifecycle-hook-9hdx6, resource: bindings, ignored listing per whitelist May 20 11:40:13.920: INFO: namespace e2e-tests-container-lifecycle-hook-9hdx6 deletion completed in 22.108198712s • [SLOW TEST:44.440 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:40:13.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0520 11:40:15.097536 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 20 11:40:15.097: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:40:15.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-646hb" for this suite. 
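The ownership chain being garbage-collected above can also be watched with kubectl; the deployment name and image are illustrative, and it is the default non-orphaning deletion that lets the garbage collector remove the owned ReplicaSet and pods:

kubectl create deployment gc-demo --image=nginx:1.14-alpine
kubectl get rs -l app=gc-demo          # one ReplicaSet, carrying an ownerReference back to the deployment
kubectl delete deployment gc-demo      # non-orphaning delete: the garbage collector removes the RS and its pods
kubectl get rs,pods -l app=gc-demo     # eventually empty once collection completes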
May 20 11:40:21.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:40:21.207: INFO: namespace: e2e-tests-gc-646hb, resource: bindings, ignored listing per whitelist May 20 11:40:21.242: INFO: namespace e2e-tests-gc-646hb deletion completed in 6.131224758s • [SLOW TEST:7.322 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:40:21.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-q9qfq A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-q9qfq;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-q9qfq A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-q9qfq;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-q9qfq.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-q9qfq.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-q9qfq.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-q9qfq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-q9qfq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-q9qfq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-q9qfq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-q9qfq.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-q9qfq.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 27.144.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.144.27_udp@PTR;check="$$(dig +tcp +noall +answer +search 27.144.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.144.27_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-q9qfq A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-q9qfq;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-q9qfq A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-q9qfq.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-q9qfq.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-q9qfq.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-q9qfq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-q9qfq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-q9qfq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-q9qfq.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-q9qfq.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 27.144.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.144.27_udp@PTR;check="$$(dig +tcp +noall +answer +search 27.144.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.144.27_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 20 11:40:29.510: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:29.519: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:29.554: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:29.558: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:29.561: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:29.564: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:29.567: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:29.570: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:29.574: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:29.577: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:29.597: INFO: Lookups using e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-q9qfq jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-q9qfq jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq jessie_udp@dns-test-service.e2e-tests-dns-q9qfq.svc jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc] May 20 11:40:34.602: INFO: 
Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:34.612: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:34.644: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:34.647: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:34.650: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:34.653: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:34.656: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:34.660: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:34.663: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:34.666: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:34.687: INFO: Lookups using e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-q9qfq jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-q9qfq jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq jessie_udp@dns-test-service.e2e-tests-dns-q9qfq.svc jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc] May 20 11:40:39.601: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:39.608: INFO: Unable to read 
wheezy_tcp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:39.635: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:39.638: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:39.640: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:39.643: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:39.645: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:39.648: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:39.651: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:39.654: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:39.674: INFO: Lookups using e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-q9qfq jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-q9qfq jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq jessie_udp@dns-test-service.e2e-tests-dns-q9qfq.svc jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc] May 20 11:40:44.602: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:44.612: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:44.642: INFO: Unable to read 
jessie_udp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:44.645: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:44.647: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:44.649: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:44.652: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:44.655: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:44.658: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:44.661: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:44.677: INFO: Lookups using e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-q9qfq jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-q9qfq jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq jessie_udp@dns-test-service.e2e-tests-dns-q9qfq.svc jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc] May 20 11:40:49.601: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:49.614: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:49.642: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:49.644: INFO: Unable to read 
jessie_tcp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:49.647: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:49.649: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:49.652: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:49.654: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:49.657: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:49.659: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:49.675: INFO: Lookups using e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-q9qfq jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-q9qfq jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq jessie_udp@dns-test-service.e2e-tests-dns-q9qfq.svc jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc] May 20 11:40:54.602: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:54.612: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:54.645: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:54.648: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:54.651: INFO: Unable to read 
jessie_udp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:54.653: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:54.656: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:54.659: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:54.662: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:54.665: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc from pod e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018: the server could not find the requested resource (get pods dns-test-afcf792c-9a8e-11ea-b520-0242ac110018) May 20 11:40:54.682: INFO: Lookups using e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-q9qfq jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-q9qfq jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq jessie_udp@dns-test-service.e2e-tests-dns-q9qfq.svc jessie_tcp@dns-test-service.e2e-tests-dns-q9qfq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q9qfq.svc] May 20 11:40:59.685: INFO: DNS probes using e2e-tests-dns-q9qfq/dns-test-afcf792c-9a8e-11ea-b520-0242ac110018 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:40:59.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-q9qfq" for this suite. 
May 20 11:41:05.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:41:05.875: INFO: namespace: e2e-tests-dns-q9qfq, resource: bindings, ignored listing per whitelist May 20 11:41:05.943: INFO: namespace e2e-tests-dns-q9qfq deletion completed in 6.125060923s • [SLOW TEST:44.701 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:41:05.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 20 11:41:06.016: INFO: Waiting up to 5m0s for pod "pod-ca689539-9a8e-11ea-b520-0242ac110018" in namespace "e2e-tests-emptydir-vmnrz" to be "success or failure" May 20 11:41:06.037: INFO: Pod "pod-ca689539-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.986179ms May 20 11:41:08.042: INFO: Pod "pod-ca689539-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025288727s May 20 11:41:10.046: INFO: Pod "pod-ca689539-9a8e-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029480573s STEP: Saw pod success May 20 11:41:10.046: INFO: Pod "pod-ca689539-9a8e-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:41:10.049: INFO: Trying to get logs from node hunter-worker2 pod pod-ca689539-9a8e-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 11:41:10.082: INFO: Waiting for pod pod-ca689539-9a8e-11ea-b520-0242ac110018 to disappear May 20 11:41:10.092: INFO: Pod pod-ca689539-9a8e-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:41:10.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vmnrz" for this suite. 
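The "volume on tmpfs" case above mounts a memory-backed emptyDir and checks the mode of the mount inside the container. A minimal pod of that general shape, purely as an illustrative sketch (name, image and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # illustrative image
    command: ["sh", "-c", "ls -ld /mnt/volume"]   # prints the mode of the mount point
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                # the Memory medium backs the emptyDir with tmpfs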
May 20 11:41:16.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:41:16.158: INFO: namespace: e2e-tests-emptydir-vmnrz, resource: bindings, ignored listing per whitelist May 20 11:41:16.215: INFO: namespace e2e-tests-emptydir-vmnrz deletion completed in 6.118720925s • [SLOW TEST:10.271 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:41:16.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 20 11:41:16.346: INFO: Waiting up to 5m0s for pod "downward-api-d08fe813-9a8e-11ea-b520-0242ac110018" in namespace "e2e-tests-downward-api-mtrjl" to be "success or failure" May 20 11:41:16.370: INFO: Pod "downward-api-d08fe813-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.952655ms May 20 11:41:18.457: INFO: Pod "downward-api-d08fe813-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111513899s May 20 11:41:20.462: INFO: Pod "downward-api-d08fe813-9a8e-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116098362s STEP: Saw pod success May 20 11:41:20.462: INFO: Pod "downward-api-d08fe813-9a8e-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:41:20.465: INFO: Trying to get logs from node hunter-worker2 pod downward-api-d08fe813-9a8e-11ea-b520-0242ac110018 container dapi-container: STEP: delete the pod May 20 11:41:20.628: INFO: Waiting for pod downward-api-d08fe813-9a8e-11ea-b520-0242ac110018 to disappear May 20 11:41:20.638: INFO: Pod downward-api-d08fe813-9a8e-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:41:20.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mtrjl" for this suite. 
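The Downward API case above injects the node's IP into the container environment. A hedged sketch of a pod doing the same thing (pod name, image and variable name are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                  # illustrative image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP                 # hypothetical variable name
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP  # the downward API exposes the node IP here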
May 20 11:41:26.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:41:26.716: INFO: namespace: e2e-tests-downward-api-mtrjl, resource: bindings, ignored listing per whitelist May 20 11:41:26.737: INFO: namespace e2e-tests-downward-api-mtrjl deletion completed in 6.095576954s • [SLOW TEST:10.522 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:41:26.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 20 11:41:26.923: INFO: Waiting up to 5m0s for pod "pod-d6dab93f-9a8e-11ea-b520-0242ac110018" in namespace "e2e-tests-emptydir-2dn5w" to be "success or failure" May 20 11:41:26.926: INFO: Pod "pod-d6dab93f-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.037664ms May 20 11:41:28.931: INFO: Pod "pod-d6dab93f-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008000475s May 20 11:41:30.935: INFO: Pod "pod-d6dab93f-9a8e-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012412306s STEP: Saw pod success May 20 11:41:30.935: INFO: Pod "pod-d6dab93f-9a8e-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:41:30.938: INFO: Trying to get logs from node hunter-worker pod pod-d6dab93f-9a8e-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 11:41:30.969: INFO: Waiting for pod pod-d6dab93f-9a8e-11ea-b520-0242ac110018 to disappear May 20 11:41:30.973: INFO: Pod pod-d6dab93f-9a8e-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:41:30.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2dn5w" for this suite. 
May 20 11:41:36.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:41:37.044: INFO: namespace: e2e-tests-emptydir-2dn5w, resource: bindings, ignored listing per whitelist May 20 11:41:37.068: INFO: namespace e2e-tests-emptydir-2dn5w deletion completed in 6.090901308s • [SLOW TEST:10.331 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:41:37.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-dcf8ebdd-9a8e-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 11:41:37.233: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dd010483-9a8e-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-4rkhm" to be "success or failure" May 20 11:41:37.273: INFO: Pod "pod-projected-configmaps-dd010483-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 40.165686ms May 20 11:41:39.276: INFO: Pod "pod-projected-configmaps-dd010483-9a8e-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04290604s May 20 11:41:41.280: INFO: Pod "pod-projected-configmaps-dd010483-9a8e-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046802742s STEP: Saw pod success May 20 11:41:41.280: INFO: Pod "pod-projected-configmaps-dd010483-9a8e-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:41:41.283: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-dd010483-9a8e-11ea-b520-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 20 11:41:41.310: INFO: Waiting for pod pod-projected-configmaps-dd010483-9a8e-11ea-b520-0242ac110018 to disappear May 20 11:41:41.322: INFO: Pod pod-projected-configmaps-dd010483-9a8e-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:41:41.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4rkhm" for this suite. 
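The projected-ConfigMap case above mounts a ConfigMap through a projected volume and asserts on the file mode set via defaultMode. As an illustrative sketch only (ConfigMap name, mode value, image and paths are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                  # illustrative image
    command: ["sh", "-c", "ls -l /etc/projected"]   # shows the projected files and their mode
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      defaultMode: 0400             # mode applied to the projected files (value is an assumption)
      sources:
      - configMap:
          name: my-configmap        # hypothetical ConfigMap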
May 20 11:41:47.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:41:47.387: INFO: namespace: e2e-tests-projected-4rkhm, resource: bindings, ignored listing per whitelist May 20 11:41:47.429: INFO: namespace e2e-tests-projected-4rkhm deletion completed in 6.102824954s • [SLOW TEST:10.361 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:41:47.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 11:41:47.488: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:41:51.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-fw7kc" for this suite. 
May 20 11:42:33.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:42:33.628: INFO: namespace: e2e-tests-pods-fw7kc, resource: bindings, ignored listing per whitelist May 20 11:42:33.630: INFO: namespace e2e-tests-pods-fw7kc deletion completed in 42.092475077s • [SLOW TEST:46.201 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:42:33.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-feafb162-9a8e-11ea-b520-0242ac110018 STEP: Creating secret with name s-test-opt-upd-feafb1b2-9a8e-11ea-b520-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-feafb162-9a8e-11ea-b520-0242ac110018 STEP: Updating secret s-test-opt-upd-feafb1b2-9a8e-11ea-b520-0242ac110018 STEP: Creating secret with name s-test-opt-create-feafb1d3-9a8e-11ea-b520-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:42:41.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-jng7q" for this suite. 
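The Secrets case above mounts optional Secrets into a volume and then deletes, updates and creates Secrets while the pod is running, expecting the volume contents to follow. A rough sketch of one such optional Secret mount (Secret and pod names, image and path are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: secret-optional-demo        # illustrative name
spec:
  containers:
  - name: watcher
    image: busybox                  # illustrative image
    command: ["sh", "-c", "while true; do ls /etc/secret-volume 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: optional-secret
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: optional-secret
    secret:
      secretName: my-optional-secret   # hypothetical Secret; may not exist yet
      optional: true                   # the pod still starts, and files appear once the Secret is created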
May 20 11:43:03.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:43:03.888: INFO: namespace: e2e-tests-secrets-jng7q, resource: bindings, ignored listing per whitelist May 20 11:43:03.921: INFO: namespace e2e-tests-secrets-jng7q deletion completed in 22.083947883s • [SLOW TEST:30.291 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:43:03.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 11:43:03.997: INFO: Creating ReplicaSet my-hostname-basic-10bc056d-9a8f-11ea-b520-0242ac110018 May 20 11:43:04.030: INFO: Pod name my-hostname-basic-10bc056d-9a8f-11ea-b520-0242ac110018: Found 0 pods out of 1 May 20 11:43:09.034: INFO: Pod name my-hostname-basic-10bc056d-9a8f-11ea-b520-0242ac110018: Found 1 pods out of 1 May 20 11:43:09.034: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-10bc056d-9a8f-11ea-b520-0242ac110018" is running May 20 11:43:09.037: INFO: Pod "my-hostname-basic-10bc056d-9a8f-11ea-b520-0242ac110018-rw6wv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 11:43:04 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 11:43:07 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 11:43:07 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 11:43:04 +0000 UTC Reason: Message:}]) May 20 11:43:09.037: INFO: Trying to dial the pod May 20 11:43:14.100: INFO: Controller my-hostname-basic-10bc056d-9a8f-11ea-b520-0242ac110018: Got expected result from replica 1 [my-hostname-basic-10bc056d-9a8f-11ea-b520-0242ac110018-rw6wv]: "my-hostname-basic-10bc056d-9a8f-11ea-b520-0242ac110018-rw6wv", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:43:14.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-xm5wp" for this suite. 
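The ReplicaSet case above creates a single replica of an image that answers with its own pod name and then dials it to verify the response. A hypothetical manifest in that spirit (names and the image are assumptions, not taken from the run above):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic           # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: example.com/serve-hostname:latest   # hypothetical image that serves its hostname over HTTP
        ports:
        - containerPort: 80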
May 20 11:43:20.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:43:20.188: INFO: namespace: e2e-tests-replicaset-xm5wp, resource: bindings, ignored listing per whitelist May 20 11:43:20.211: INFO: namespace e2e-tests-replicaset-xm5wp deletion completed in 6.107366309s • [SLOW TEST:16.289 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:43:20.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-1a768a0e-9a8f-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 11:43:20.331: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a77219b-9a8f-11ea-b520-0242ac110018" in namespace "e2e-tests-configmap-k6qnw" to be "success or failure" May 20 11:43:20.350: INFO: Pod "pod-configmaps-1a77219b-9a8f-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.318804ms May 20 11:43:22.352: INFO: Pod "pod-configmaps-1a77219b-9a8f-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021097678s May 20 11:43:24.356: INFO: Pod "pod-configmaps-1a77219b-9a8f-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025022865s May 20 11:43:26.360: INFO: Pod "pod-configmaps-1a77219b-9a8f-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029203028s STEP: Saw pod success May 20 11:43:26.360: INFO: Pod "pod-configmaps-1a77219b-9a8f-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:43:26.364: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-1a77219b-9a8f-11ea-b520-0242ac110018 container configmap-volume-test: STEP: delete the pod May 20 11:43:26.494: INFO: Waiting for pod pod-configmaps-1a77219b-9a8f-11ea-b520-0242ac110018 to disappear May 20 11:43:26.497: INFO: Pod pod-configmaps-1a77219b-9a8f-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:43:26.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-k6qnw" for this suite. 
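The ConfigMap case above consumes a ConfigMap as a volume with explicit key-to-path mappings while running as a non-root user. An illustrative sketch (UID, ConfigMap name, key, image and paths are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo      # illustrative name
spec:
  securityContext:
    runAsUser: 1000                 # run the pod as a non-root UID
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                  # illustrative image
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap            # hypothetical ConfigMap
      items:                        # the "mappings": project one key to a chosen path
      - key: data-2
        path: path/to/data-2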
May 20 11:43:32.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:43:32.576: INFO: namespace: e2e-tests-configmap-k6qnw, resource: bindings, ignored listing per whitelist May 20 11:43:32.601: INFO: namespace e2e-tests-configmap-k6qnw deletion completed in 6.101018465s • [SLOW TEST:12.390 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:43:32.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 20 11:43:37.356: INFO: Successfully updated pod "labelsupdate21d5442c-9a8f-11ea-b520-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:43:39.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-k2xgm" for this suite. 
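The Downward API volume case above exposes the pod's labels as a file and expects the kubelet to rewrite that file after the labels are modified (hence the "Successfully updated pod" line). A hedged sketch of such a pod (names, image, label and paths are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: labels-update-demo          # illustrative name
  labels:
    purpose: demo                   # label that the projected file will track
spec:
  containers:
  - name: client-container
    image: busybox                  # illustrative image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels   # refreshed by the kubelet when the pod's labels change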
May 20 11:44:01.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:44:01.428: INFO: namespace: e2e-tests-downward-api-k2xgm, resource: bindings, ignored listing per whitelist May 20 11:44:01.491: INFO: namespace e2e-tests-downward-api-k2xgm deletion completed in 22.095414317s • [SLOW TEST:28.890 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:44:01.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components May 20 11:44:01.594: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 20 11:44:01.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-znpm6' May 20 11:44:07.655: INFO: stderr: "" May 20 11:44:07.655: INFO: stdout: "service/redis-slave created\n" May 20 11:44:07.655: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 20 11:44:07.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-znpm6' May 20 11:44:07.964: INFO: stderr: "" May 20 11:44:07.964: INFO: stdout: "service/redis-master created\n" May 20 11:44:07.964: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 20 11:44:07.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-znpm6' May 20 11:44:08.266: INFO: stderr: "" May 20 11:44:08.266: INFO: stdout: "service/frontend created\n" May 20 11:44:08.266: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 20 11:44:08.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-znpm6' May 20 11:44:09.416: INFO: stderr: "" May 20 11:44:09.416: INFO: stdout: "deployment.extensions/frontend created\n" May 20 11:44:09.416: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 20 11:44:09.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-znpm6' May 20 11:44:09.821: INFO: stderr: "" May 20 11:44:09.821: INFO: stdout: "deployment.extensions/redis-master created\n" May 20 11:44:09.821: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 20 11:44:09.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-znpm6' May 20 11:44:10.205: INFO: stderr: "" May 20 11:44:10.205: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app May 20 11:44:10.205: INFO: Waiting for all frontend pods to be Running. May 20 11:44:20.256: INFO: Waiting for frontend to serve content. May 20 11:44:21.292: INFO: Trying to add a new entry to the guestbook. May 20 11:44:21.325: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 20 11:44:21.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-znpm6' May 20 11:44:21.501: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 20 11:44:21.501: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 20 11:44:21.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-znpm6' May 20 11:44:21.690: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 11:44:21.690: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 20 11:44:21.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-znpm6' May 20 11:44:21.848: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 11:44:21.848: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 20 11:44:21.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-znpm6' May 20 11:44:21.988: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 11:44:21.988: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources May 20 11:44:21.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-znpm6' May 20 11:44:22.098: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 11:44:22.098: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 20 11:44:22.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-znpm6' May 20 11:44:22.348: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 11:44:22.348: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:44:22.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-znpm6" for this suite. 
May 20 11:45:02.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:45:02.763: INFO: namespace: e2e-tests-kubectl-znpm6, resource: bindings, ignored listing per whitelist May 20 11:45:02.780: INFO: namespace e2e-tests-kubectl-znpm6 deletion completed in 40.427834979s • [SLOW TEST:61.289 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:45:02.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 20 11:45:10.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:10.968: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:12.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:12.973: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:14.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:14.971: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:16.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:16.974: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:18.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:18.974: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:20.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:20.978: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:22.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:22.974: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:24.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:24.972: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:26.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:26.973: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:28.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:28.973: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:30.968: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear May 20 11:45:30.973: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:32.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:32.974: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:34.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:34.973: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:36.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:36.973: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:38.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:38.972: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:40.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:40.972: INFO: Pod pod-with-prestop-exec-hook still exists May 20 11:45:42.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 20 11:45:42.972: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:45:42.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-ghh49" for this suite. May 20 11:46:04.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:46:05.045: INFO: namespace: e2e-tests-container-lifecycle-hook-ghh49, resource: bindings, ignored listing per whitelist May 20 11:46:05.075: INFO: namespace e2e-tests-container-lifecycle-hook-ghh49 deletion completed in 22.091443604s • [SLOW TEST:62.295 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:46:05.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod May 20 11:46:05.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wjzkc' May 20 11:46:05.441: INFO: stderr: "" May 20 11:46:05.441: INFO: stdout: "pod/pause created\n" May 20 11:46:05.441: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 20 11:46:05.441: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-wjzkc" to be "running and ready" May 20 
11:46:05.449: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.746317ms May 20 11:46:07.452: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010839355s May 20 11:46:09.456: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.014781008s May 20 11:46:09.456: INFO: Pod "pause" satisfied condition "running and ready" May 20 11:46:09.456: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod May 20 11:46:09.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-wjzkc' May 20 11:46:09.578: INFO: stderr: "" May 20 11:46:09.578: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 20 11:46:09.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-wjzkc' May 20 11:46:09.673: INFO: stderr: "" May 20 11:46:09.674: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 20 11:46:09.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-wjzkc' May 20 11:46:09.789: INFO: stderr: "" May 20 11:46:09.789: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 20 11:46:09.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-wjzkc' May 20 11:46:09.883: INFO: stderr: "" May 20 11:46:09.883: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources May 20 11:46:09.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wjzkc' May 20 11:46:10.028: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 20 11:46:10.028: INFO: stdout: "pod \"pause\" force deleted\n" May 20 11:46:10.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-wjzkc' May 20 11:46:10.151: INFO: stderr: "No resources found.\n" May 20 11:46:10.151: INFO: stdout: "" May 20 11:46:10.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-wjzkc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 20 11:46:10.252: INFO: stderr: "" May 20 11:46:10.252: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:46:10.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wjzkc" for this suite. May 20 11:46:16.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:46:16.337: INFO: namespace: e2e-tests-kubectl-wjzkc, resource: bindings, ignored listing per whitelist May 20 11:46:16.396: INFO: namespace e2e-tests-kubectl-wjzkc deletion completed in 6.141380321s • [SLOW TEST:11.321 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:46:16.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 11:46:16.541: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8378a956-9a8f-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-w4769" to be "success or failure" May 20 11:46:16.545: INFO: Pod "downwardapi-volume-8378a956-9a8f-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143658ms May 20 11:46:18.675: INFO: Pod "downwardapi-volume-8378a956-9a8f-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133402356s May 20 11:46:20.678: INFO: Pod "downwardapi-volume-8378a956-9a8f-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.136741934s STEP: Saw pod success May 20 11:46:20.678: INFO: Pod "downwardapi-volume-8378a956-9a8f-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:46:20.680: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8378a956-9a8f-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 11:46:20.764: INFO: Waiting for pod downwardapi-volume-8378a956-9a8f-11ea-b520-0242ac110018 to disappear May 20 11:46:20.827: INFO: Pod downwardapi-volume-8378a956-9a8f-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:46:20.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-w4769" for this suite. May 20 11:46:26.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:46:26.980: INFO: namespace: e2e-tests-projected-w4769, resource: bindings, ignored listing per whitelist May 20 11:46:26.982: INFO: namespace e2e-tests-projected-w4769 deletion completed in 6.151290753s • [SLOW TEST:10.585 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:46:26.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 20 11:46:27.600: INFO: created pod pod-service-account-defaultsa May 20 11:46:27.600: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 20 11:46:27.623: INFO: created pod pod-service-account-mountsa May 20 11:46:27.623: INFO: pod pod-service-account-mountsa service account token volume mount: true May 20 11:46:27.652: INFO: created pod pod-service-account-nomountsa May 20 11:46:27.652: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 20 11:46:27.664: INFO: created pod pod-service-account-defaultsa-mountspec May 20 11:46:27.664: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 20 11:46:27.689: INFO: created pod pod-service-account-mountsa-mountspec May 20 11:46:27.689: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 20 11:46:27.750: INFO: created pod pod-service-account-nomountsa-mountspec May 20 11:46:27.750: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 20 11:46:27.756: INFO: created pod pod-service-account-defaultsa-nomountspec May 20 11:46:27.756: INFO: pod 
pod-service-account-defaultsa-nomountspec service account token volume mount: false May 20 11:46:27.787: INFO: created pod pod-service-account-mountsa-nomountspec May 20 11:46:27.787: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 20 11:46:27.811: INFO: created pod pod-service-account-nomountsa-nomountspec May 20 11:46:27.811: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:46:27.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-lzn5v" for this suite. May 20 11:46:57.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:46:58.042: INFO: namespace: e2e-tests-svcaccounts-lzn5v, resource: bindings, ignored listing per whitelist May 20 11:46:58.049: INFO: namespace e2e-tests-svcaccounts-lzn5v deletion completed in 30.161805956s • [SLOW TEST:31.067 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:46:58.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod May 20 11:47:02.191: INFO: Pod pod-hostip-9c4a6063-9a8f-11ea-b520-0242ac110018 has hostIP: 172.17.0.3 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:47:02.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-7kr7s" for this suite. 
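Looking back at the ServiceAccounts test earlier in this block, the "service account token volume mount: true/false" lines reflect the interaction of automountServiceAccountToken on the ServiceAccount and on the pod spec, where the pod-level field takes precedence. A hypothetical pair of objects opting out of the token mount (names and image are assumptions):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                  # hypothetical ServiceAccount
automountServiceAccountToken: false # opt out at the ServiceAccount level
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomount-demo            # illustrative name
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod-level setting; overrides the ServiceAccount default
  containers:
  - name: main
    image: busybox                  # illustrative image
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || true"]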
May 20 11:47:24.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:47:24.277: INFO: namespace: e2e-tests-pods-7kr7s, resource: bindings, ignored listing per whitelist May 20 11:47:24.309: INFO: namespace e2e-tests-pods-7kr7s deletion completed in 22.114900257s • [SLOW TEST:26.260 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:47:24.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-hv628 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 20 11:47:24.426: INFO: Found 0 stateful pods, waiting for 3 May 20 11:47:34.444: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 20 11:47:34.444: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 20 11:47:34.444: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 20 11:47:44.431: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 20 11:47:44.431: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 20 11:47:44.431: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 20 11:47:44.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hv628 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 20 11:47:44.742: INFO: stderr: "I0520 11:47:44.579857 1612 log.go:172] (0xc0003fa4d0) (0xc0007b6640) Create stream\nI0520 11:47:44.579941 1612 log.go:172] (0xc0003fa4d0) (0xc0007b6640) Stream added, broadcasting: 1\nI0520 11:47:44.582477 1612 log.go:172] (0xc0003fa4d0) Reply frame received for 1\nI0520 11:47:44.582532 1612 log.go:172] (0xc0003fa4d0) (0xc00066ad20) Create stream\nI0520 11:47:44.582546 1612 log.go:172] (0xc0003fa4d0) (0xc00066ad20) Stream added, broadcasting: 3\nI0520 11:47:44.583660 1612 log.go:172] (0xc0003fa4d0) Reply frame received for 3\nI0520 11:47:44.583711 1612 log.go:172] (0xc0003fa4d0) (0xc0005ec000) Create stream\nI0520 11:47:44.583730 1612 log.go:172] 
(0xc0003fa4d0) (0xc0005ec000) Stream added, broadcasting: 5\nI0520 11:47:44.584671 1612 log.go:172] (0xc0003fa4d0) Reply frame received for 5\nI0520 11:47:44.734626 1612 log.go:172] (0xc0003fa4d0) Data frame received for 5\nI0520 11:47:44.734702 1612 log.go:172] (0xc0003fa4d0) Data frame received for 3\nI0520 11:47:44.734743 1612 log.go:172] (0xc00066ad20) (3) Data frame handling\nI0520 11:47:44.734763 1612 log.go:172] (0xc00066ad20) (3) Data frame sent\nI0520 11:47:44.734779 1612 log.go:172] (0xc0005ec000) (5) Data frame handling\nI0520 11:47:44.735190 1612 log.go:172] (0xc0003fa4d0) Data frame received for 3\nI0520 11:47:44.735208 1612 log.go:172] (0xc00066ad20) (3) Data frame handling\nI0520 11:47:44.736594 1612 log.go:172] (0xc0003fa4d0) Data frame received for 1\nI0520 11:47:44.736706 1612 log.go:172] (0xc0007b6640) (1) Data frame handling\nI0520 11:47:44.736744 1612 log.go:172] (0xc0007b6640) (1) Data frame sent\nI0520 11:47:44.736768 1612 log.go:172] (0xc0003fa4d0) (0xc0007b6640) Stream removed, broadcasting: 1\nI0520 11:47:44.736803 1612 log.go:172] (0xc0003fa4d0) Go away received\nI0520 11:47:44.737055 1612 log.go:172] (0xc0003fa4d0) (0xc0007b6640) Stream removed, broadcasting: 1\nI0520 11:47:44.737081 1612 log.go:172] (0xc0003fa4d0) (0xc00066ad20) Stream removed, broadcasting: 3\nI0520 11:47:44.737098 1612 log.go:172] (0xc0003fa4d0) (0xc0005ec000) Stream removed, broadcasting: 5\n" May 20 11:47:44.742: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 20 11:47:44.742: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 20 11:47:54.779: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 20 11:48:04.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hv628 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 11:48:05.001: INFO: stderr: "I0520 11:48:04.936721 1634 log.go:172] (0xc00015c840) (0xc00073e640) Create stream\nI0520 11:48:04.936782 1634 log.go:172] (0xc00015c840) (0xc00073e640) Stream added, broadcasting: 1\nI0520 11:48:04.939254 1634 log.go:172] (0xc00015c840) Reply frame received for 1\nI0520 11:48:04.939310 1634 log.go:172] (0xc00015c840) (0xc0005f6dc0) Create stream\nI0520 11:48:04.939340 1634 log.go:172] (0xc00015c840) (0xc0005f6dc0) Stream added, broadcasting: 3\nI0520 11:48:04.940317 1634 log.go:172] (0xc00015c840) Reply frame received for 3\nI0520 11:48:04.940349 1634 log.go:172] (0xc00015c840) (0xc0005f6f00) Create stream\nI0520 11:48:04.940359 1634 log.go:172] (0xc00015c840) (0xc0005f6f00) Stream added, broadcasting: 5\nI0520 11:48:04.941495 1634 log.go:172] (0xc00015c840) Reply frame received for 5\nI0520 11:48:04.995356 1634 log.go:172] (0xc00015c840) Data frame received for 5\nI0520 11:48:04.995396 1634 log.go:172] (0xc0005f6f00) (5) Data frame handling\nI0520 11:48:04.995440 1634 log.go:172] (0xc00015c840) Data frame received for 3\nI0520 11:48:04.995476 1634 log.go:172] (0xc0005f6dc0) (3) Data frame handling\nI0520 11:48:04.995494 1634 log.go:172] (0xc0005f6dc0) (3) Data frame sent\nI0520 11:48:04.995504 1634 log.go:172] (0xc00015c840) Data frame received for 3\nI0520 11:48:04.995509 1634 log.go:172] (0xc0005f6dc0) (3) Data frame handling\nI0520 11:48:04.997036 1634 
log.go:172] (0xc00015c840) Data frame received for 1\nI0520 11:48:04.997071 1634 log.go:172] (0xc00073e640) (1) Data frame handling\nI0520 11:48:04.997090 1634 log.go:172] (0xc00073e640) (1) Data frame sent\nI0520 11:48:04.997353 1634 log.go:172] (0xc00015c840) (0xc00073e640) Stream removed, broadcasting: 1\nI0520 11:48:04.997377 1634 log.go:172] (0xc00015c840) Go away received\nI0520 11:48:04.997719 1634 log.go:172] (0xc00015c840) (0xc00073e640) Stream removed, broadcasting: 1\nI0520 11:48:04.997738 1634 log.go:172] (0xc00015c840) (0xc0005f6dc0) Stream removed, broadcasting: 3\nI0520 11:48:04.997749 1634 log.go:172] (0xc00015c840) (0xc0005f6f00) Stream removed, broadcasting: 5\n" May 20 11:48:05.001: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 20 11:48:05.001: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 20 11:48:15.036: INFO: Waiting for StatefulSet e2e-tests-statefulset-hv628/ss2 to complete update May 20 11:48:15.036: INFO: Waiting for Pod e2e-tests-statefulset-hv628/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 20 11:48:15.036: INFO: Waiting for Pod e2e-tests-statefulset-hv628/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 20 11:48:25.041: INFO: Waiting for StatefulSet e2e-tests-statefulset-hv628/ss2 to complete update May 20 11:48:25.041: INFO: Waiting for Pod e2e-tests-statefulset-hv628/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 20 11:48:35.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hv628 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 20 11:48:35.357: INFO: stderr: "I0520 11:48:35.193719 1657 log.go:172] (0xc0008322c0) (0xc000720640) Create stream\nI0520 11:48:35.193775 1657 log.go:172] (0xc0008322c0) (0xc000720640) Stream added, broadcasting: 1\nI0520 11:48:35.204027 1657 log.go:172] (0xc0008322c0) Reply frame received for 1\nI0520 11:48:35.204219 1657 log.go:172] (0xc0008322c0) (0xc00065cc80) Create stream\nI0520 11:48:35.204283 1657 log.go:172] (0xc0008322c0) (0xc00065cc80) Stream added, broadcasting: 3\nI0520 11:48:35.205574 1657 log.go:172] (0xc0008322c0) Reply frame received for 3\nI0520 11:48:35.205606 1657 log.go:172] (0xc0008322c0) (0xc00065cdc0) Create stream\nI0520 11:48:35.205618 1657 log.go:172] (0xc0008322c0) (0xc00065cdc0) Stream added, broadcasting: 5\nI0520 11:48:35.206954 1657 log.go:172] (0xc0008322c0) Reply frame received for 5\nI0520 11:48:35.350297 1657 log.go:172] (0xc0008322c0) Data frame received for 5\nI0520 11:48:35.350327 1657 log.go:172] (0xc00065cdc0) (5) Data frame handling\nI0520 11:48:35.350363 1657 log.go:172] (0xc0008322c0) Data frame received for 3\nI0520 11:48:35.350373 1657 log.go:172] (0xc00065cc80) (3) Data frame handling\nI0520 11:48:35.350384 1657 log.go:172] (0xc00065cc80) (3) Data frame sent\nI0520 11:48:35.350396 1657 log.go:172] (0xc0008322c0) Data frame received for 3\nI0520 11:48:35.350408 1657 log.go:172] (0xc00065cc80) (3) Data frame handling\nI0520 11:48:35.351807 1657 log.go:172] (0xc0008322c0) Data frame received for 1\nI0520 11:48:35.351830 1657 log.go:172] (0xc000720640) (1) Data frame handling\nI0520 11:48:35.351859 1657 log.go:172] (0xc000720640) (1) Data frame sent\nI0520 11:48:35.351883 1657 log.go:172] (0xc0008322c0) (0xc000720640) Stream removed, broadcasting: 1\nI0520 
11:48:35.351906 1657 log.go:172] (0xc0008322c0) Go away received\nI0520 11:48:35.352070 1657 log.go:172] (0xc0008322c0) (0xc000720640) Stream removed, broadcasting: 1\nI0520 11:48:35.352087 1657 log.go:172] (0xc0008322c0) (0xc00065cc80) Stream removed, broadcasting: 3\nI0520 11:48:35.352096 1657 log.go:172] (0xc0008322c0) (0xc00065cdc0) Stream removed, broadcasting: 5\n" May 20 11:48:35.357: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 20 11:48:35.357: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 20 11:48:45.388: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 20 11:48:55.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hv628 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 11:48:55.612: INFO: stderr: "I0520 11:48:55.532042 1679 log.go:172] (0xc000138840) (0xc000738640) Create stream\nI0520 11:48:55.532128 1679 log.go:172] (0xc000138840) (0xc000738640) Stream added, broadcasting: 1\nI0520 11:48:55.534973 1679 log.go:172] (0xc000138840) Reply frame received for 1\nI0520 11:48:55.535017 1679 log.go:172] (0xc000138840) (0xc000692dc0) Create stream\nI0520 11:48:55.535033 1679 log.go:172] (0xc000138840) (0xc000692dc0) Stream added, broadcasting: 3\nI0520 11:48:55.536103 1679 log.go:172] (0xc000138840) Reply frame received for 3\nI0520 11:48:55.536150 1679 log.go:172] (0xc000138840) (0xc0007386e0) Create stream\nI0520 11:48:55.536167 1679 log.go:172] (0xc000138840) (0xc0007386e0) Stream added, broadcasting: 5\nI0520 11:48:55.537691 1679 log.go:172] (0xc000138840) Reply frame received for 5\nI0520 11:48:55.606694 1679 log.go:172] (0xc000138840) Data frame received for 5\nI0520 11:48:55.606717 1679 log.go:172] (0xc0007386e0) (5) Data frame handling\nI0520 11:48:55.606737 1679 log.go:172] (0xc000138840) Data frame received for 3\nI0520 11:48:55.606758 1679 log.go:172] (0xc000692dc0) (3) Data frame handling\nI0520 11:48:55.606782 1679 log.go:172] (0xc000692dc0) (3) Data frame sent\nI0520 11:48:55.606791 1679 log.go:172] (0xc000138840) Data frame received for 3\nI0520 11:48:55.606798 1679 log.go:172] (0xc000692dc0) (3) Data frame handling\nI0520 11:48:55.608336 1679 log.go:172] (0xc000138840) Data frame received for 1\nI0520 11:48:55.608369 1679 log.go:172] (0xc000738640) (1) Data frame handling\nI0520 11:48:55.608401 1679 log.go:172] (0xc000738640) (1) Data frame sent\nI0520 11:48:55.608440 1679 log.go:172] (0xc000138840) (0xc000738640) Stream removed, broadcasting: 1\nI0520 11:48:55.608527 1679 log.go:172] (0xc000138840) Go away received\nI0520 11:48:55.608656 1679 log.go:172] (0xc000138840) (0xc000738640) Stream removed, broadcasting: 1\nI0520 11:48:55.608728 1679 log.go:172] (0xc000138840) (0xc000692dc0) Stream removed, broadcasting: 3\nI0520 11:48:55.608814 1679 log.go:172] (0xc000138840) (0xc0007386e0) Stream removed, broadcasting: 5\n" May 20 11:48:55.612: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 20 11:48:55.612: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 20 11:49:15.632: INFO: Waiting for StatefulSet e2e-tests-statefulset-hv628/ss2 to complete update May 20 11:49:15.632: INFO: Waiting for Pod e2e-tests-statefulset-hv628/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] 
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 20 11:49:25.640: INFO: Deleting all statefulset in ns e2e-tests-statefulset-hv628 May 20 11:49:25.643: INFO: Scaling statefulset ss2 to 0 May 20 11:49:55.661: INFO: Waiting for statefulset status.replicas updated to 0 May 20 11:49:55.668: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:49:55.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-hv628" for this suite. May 20 11:50:03.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:50:03.752: INFO: namespace: e2e-tests-statefulset-hv628, resource: bindings, ignored listing per whitelist May 20 11:50:03.838: INFO: namespace e2e-tests-statefulset-hv628 deletion completed in 8.150562745s • [SLOW TEST:159.528 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:50:03.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 20 11:50:18.131: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-px4ln PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 11:50:18.132: INFO: >>> kubeConfig: /root/.kube/config I0520 11:50:18.158954 7 log.go:172] (0xc0009984d0) (0xc0012f3220) Create stream I0520 11:50:18.158988 7 log.go:172] (0xc0009984d0) (0xc0012f3220) Stream added, broadcasting: 1 I0520 11:50:18.160827 7 log.go:172] (0xc0009984d0) Reply frame received for 1 I0520 11:50:18.160872 7 log.go:172] (0xc0009984d0) (0xc0024adf40) Create stream I0520 11:50:18.160885 7 log.go:172] (0xc0009984d0) (0xc0024adf40) Stream added, broadcasting: 3 I0520 11:50:18.162133 7 log.go:172] (0xc0009984d0) Reply frame received for 3 I0520 11:50:18.162165 7 log.go:172] (0xc0009984d0) (0xc001911360) Create stream I0520 11:50:18.162181 7 log.go:172] (0xc0009984d0) (0xc001911360) Stream added, 
broadcasting: 5 I0520 11:50:18.163056 7 log.go:172] (0xc0009984d0) Reply frame received for 5 I0520 11:50:18.230860 7 log.go:172] (0xc0009984d0) Data frame received for 5 I0520 11:50:18.230930 7 log.go:172] (0xc001911360) (5) Data frame handling I0520 11:50:18.230981 7 log.go:172] (0xc0009984d0) Data frame received for 3 I0520 11:50:18.231057 7 log.go:172] (0xc0024adf40) (3) Data frame handling I0520 11:50:18.231095 7 log.go:172] (0xc0024adf40) (3) Data frame sent I0520 11:50:18.231111 7 log.go:172] (0xc0009984d0) Data frame received for 3 I0520 11:50:18.231124 7 log.go:172] (0xc0024adf40) (3) Data frame handling I0520 11:50:18.232486 7 log.go:172] (0xc0009984d0) Data frame received for 1 I0520 11:50:18.232506 7 log.go:172] (0xc0012f3220) (1) Data frame handling I0520 11:50:18.232521 7 log.go:172] (0xc0012f3220) (1) Data frame sent I0520 11:50:18.232529 7 log.go:172] (0xc0009984d0) (0xc0012f3220) Stream removed, broadcasting: 1 I0520 11:50:18.232611 7 log.go:172] (0xc0009984d0) (0xc0012f3220) Stream removed, broadcasting: 1 I0520 11:50:18.232626 7 log.go:172] (0xc0009984d0) (0xc0024adf40) Stream removed, broadcasting: 3 I0520 11:50:18.232633 7 log.go:172] (0xc0009984d0) (0xc001911360) Stream removed, broadcasting: 5 May 20 11:50:18.232: INFO: Exec stderr: "" May 20 11:50:18.232: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-px4ln PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 11:50:18.232: INFO: >>> kubeConfig: /root/.kube/config I0520 11:50:18.232743 7 log.go:172] (0xc0009984d0) Go away received I0520 11:50:18.262884 7 log.go:172] (0xc000825d90) (0xc0019115e0) Create stream I0520 11:50:18.262933 7 log.go:172] (0xc000825d90) (0xc0019115e0) Stream added, broadcasting: 1 I0520 11:50:18.265051 7 log.go:172] (0xc000825d90) Reply frame received for 1 I0520 11:50:18.265104 7 log.go:172] (0xc000825d90) (0xc001cce000) Create stream I0520 11:50:18.265308 7 log.go:172] (0xc000825d90) (0xc001cce000) Stream added, broadcasting: 3 I0520 11:50:18.266305 7 log.go:172] (0xc000825d90) Reply frame received for 3 I0520 11:50:18.266348 7 log.go:172] (0xc000825d90) (0xc0020bc780) Create stream I0520 11:50:18.266359 7 log.go:172] (0xc000825d90) (0xc0020bc780) Stream added, broadcasting: 5 I0520 11:50:18.267290 7 log.go:172] (0xc000825d90) Reply frame received for 5 I0520 11:50:18.318368 7 log.go:172] (0xc000825d90) Data frame received for 5 I0520 11:50:18.318405 7 log.go:172] (0xc000825d90) Data frame received for 3 I0520 11:50:18.318447 7 log.go:172] (0xc001cce000) (3) Data frame handling I0520 11:50:18.318477 7 log.go:172] (0xc001cce000) (3) Data frame sent I0520 11:50:18.318495 7 log.go:172] (0xc000825d90) Data frame received for 3 I0520 11:50:18.318507 7 log.go:172] (0xc001cce000) (3) Data frame handling I0520 11:50:18.318531 7 log.go:172] (0xc0020bc780) (5) Data frame handling I0520 11:50:18.319833 7 log.go:172] (0xc000825d90) Data frame received for 1 I0520 11:50:18.319864 7 log.go:172] (0xc0019115e0) (1) Data frame handling I0520 11:50:18.319909 7 log.go:172] (0xc0019115e0) (1) Data frame sent I0520 11:50:18.319930 7 log.go:172] (0xc000825d90) (0xc0019115e0) Stream removed, broadcasting: 1 I0520 11:50:18.319950 7 log.go:172] (0xc000825d90) Go away received I0520 11:50:18.320117 7 log.go:172] (0xc000825d90) (0xc0019115e0) Stream removed, broadcasting: 1 I0520 11:50:18.320173 7 log.go:172] (0xc000825d90) (0xc001cce000) Stream removed, broadcasting: 3 I0520 11:50:18.320197 7 
log.go:172] (0xc000825d90) (0xc0020bc780) Stream removed, broadcasting: 5 May 20 11:50:18.320: INFO: Exec stderr: "" May 20 11:50:18.320: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-px4ln PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 11:50:18.320: INFO: >>> kubeConfig: /root/.kube/config I0520 11:50:18.348023 7 log.go:172] (0xc0009989a0) (0xc0012f3680) Create stream I0520 11:50:18.348050 7 log.go:172] (0xc0009989a0) (0xc0012f3680) Stream added, broadcasting: 1 I0520 11:50:18.350802 7 log.go:172] (0xc0009989a0) Reply frame received for 1 I0520 11:50:18.350842 7 log.go:172] (0xc0009989a0) (0xc001cce0a0) Create stream I0520 11:50:18.350862 7 log.go:172] (0xc0009989a0) (0xc001cce0a0) Stream added, broadcasting: 3 I0520 11:50:18.351880 7 log.go:172] (0xc0009989a0) Reply frame received for 3 I0520 11:50:18.351929 7 log.go:172] (0xc0009989a0) (0xc000b71900) Create stream I0520 11:50:18.351953 7 log.go:172] (0xc0009989a0) (0xc000b71900) Stream added, broadcasting: 5 I0520 11:50:18.352912 7 log.go:172] (0xc0009989a0) Reply frame received for 5 I0520 11:50:18.415827 7 log.go:172] (0xc0009989a0) Data frame received for 5 I0520 11:50:18.415869 7 log.go:172] (0xc000b71900) (5) Data frame handling I0520 11:50:18.415920 7 log.go:172] (0xc0009989a0) Data frame received for 3 I0520 11:50:18.415938 7 log.go:172] (0xc001cce0a0) (3) Data frame handling I0520 11:50:18.415957 7 log.go:172] (0xc001cce0a0) (3) Data frame sent I0520 11:50:18.415974 7 log.go:172] (0xc0009989a0) Data frame received for 3 I0520 11:50:18.415991 7 log.go:172] (0xc001cce0a0) (3) Data frame handling I0520 11:50:18.418113 7 log.go:172] (0xc0009989a0) Data frame received for 1 I0520 11:50:18.418143 7 log.go:172] (0xc0012f3680) (1) Data frame handling I0520 11:50:18.418158 7 log.go:172] (0xc0012f3680) (1) Data frame sent I0520 11:50:18.418174 7 log.go:172] (0xc0009989a0) (0xc0012f3680) Stream removed, broadcasting: 1 I0520 11:50:18.418199 7 log.go:172] (0xc0009989a0) Go away received I0520 11:50:18.418430 7 log.go:172] (0xc0009989a0) (0xc0012f3680) Stream removed, broadcasting: 1 I0520 11:50:18.418447 7 log.go:172] (0xc0009989a0) (0xc001cce0a0) Stream removed, broadcasting: 3 I0520 11:50:18.418460 7 log.go:172] (0xc0009989a0) (0xc000b71900) Stream removed, broadcasting: 5 May 20 11:50:18.418: INFO: Exec stderr: "" May 20 11:50:18.418: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-px4ln PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 11:50:18.418: INFO: >>> kubeConfig: /root/.kube/config I0520 11:50:18.444433 7 log.go:172] (0xc000998e70) (0xc0012f3900) Create stream I0520 11:50:18.444461 7 log.go:172] (0xc000998e70) (0xc0012f3900) Stream added, broadcasting: 1 I0520 11:50:18.446340 7 log.go:172] (0xc000998e70) Reply frame received for 1 I0520 11:50:18.446382 7 log.go:172] (0xc000998e70) (0xc001cce140) Create stream I0520 11:50:18.446395 7 log.go:172] (0xc000998e70) (0xc001cce140) Stream added, broadcasting: 3 I0520 11:50:18.447209 7 log.go:172] (0xc000998e70) Reply frame received for 3 I0520 11:50:18.447271 7 log.go:172] (0xc000998e70) (0xc001911680) Create stream I0520 11:50:18.447300 7 log.go:172] (0xc000998e70) (0xc001911680) Stream added, broadcasting: 5 I0520 11:50:18.448431 7 log.go:172] (0xc000998e70) Reply frame received for 5 I0520 11:50:18.521305 7 log.go:172] (0xc000998e70) Data frame 
received for 5 I0520 11:50:18.521351 7 log.go:172] (0xc001911680) (5) Data frame handling I0520 11:50:18.521375 7 log.go:172] (0xc000998e70) Data frame received for 3 I0520 11:50:18.521391 7 log.go:172] (0xc001cce140) (3) Data frame handling I0520 11:50:18.521408 7 log.go:172] (0xc001cce140) (3) Data frame sent I0520 11:50:18.521418 7 log.go:172] (0xc000998e70) Data frame received for 3 I0520 11:50:18.521424 7 log.go:172] (0xc001cce140) (3) Data frame handling I0520 11:50:18.523492 7 log.go:172] (0xc000998e70) Data frame received for 1 I0520 11:50:18.523516 7 log.go:172] (0xc0012f3900) (1) Data frame handling I0520 11:50:18.523538 7 log.go:172] (0xc0012f3900) (1) Data frame sent I0520 11:50:18.523555 7 log.go:172] (0xc000998e70) (0xc0012f3900) Stream removed, broadcasting: 1 I0520 11:50:18.523568 7 log.go:172] (0xc000998e70) Go away received I0520 11:50:18.523753 7 log.go:172] (0xc000998e70) (0xc0012f3900) Stream removed, broadcasting: 1 I0520 11:50:18.523782 7 log.go:172] (0xc000998e70) (0xc001cce140) Stream removed, broadcasting: 3 I0520 11:50:18.523797 7 log.go:172] (0xc000998e70) (0xc001911680) Stream removed, broadcasting: 5 May 20 11:50:18.523: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 20 11:50:18.523: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-px4ln PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 11:50:18.523: INFO: >>> kubeConfig: /root/.kube/config I0520 11:50:18.552445 7 log.go:172] (0xc0008db8c0) (0xc001911900) Create stream I0520 11:50:18.552475 7 log.go:172] (0xc0008db8c0) (0xc001911900) Stream added, broadcasting: 1 I0520 11:50:18.554669 7 log.go:172] (0xc0008db8c0) Reply frame received for 1 I0520 11:50:18.554727 7 log.go:172] (0xc0008db8c0) (0xc0020bc820) Create stream I0520 11:50:18.554742 7 log.go:172] (0xc0008db8c0) (0xc0020bc820) Stream added, broadcasting: 3 I0520 11:50:18.555767 7 log.go:172] (0xc0008db8c0) Reply frame received for 3 I0520 11:50:18.555800 7 log.go:172] (0xc0008db8c0) (0xc0020bc8c0) Create stream I0520 11:50:18.555813 7 log.go:172] (0xc0008db8c0) (0xc0020bc8c0) Stream added, broadcasting: 5 I0520 11:50:18.556729 7 log.go:172] (0xc0008db8c0) Reply frame received for 5 I0520 11:50:18.608606 7 log.go:172] (0xc0008db8c0) Data frame received for 3 I0520 11:50:18.608672 7 log.go:172] (0xc0020bc820) (3) Data frame handling I0520 11:50:18.608706 7 log.go:172] (0xc0020bc820) (3) Data frame sent I0520 11:50:18.608728 7 log.go:172] (0xc0008db8c0) Data frame received for 3 I0520 11:50:18.608740 7 log.go:172] (0xc0020bc820) (3) Data frame handling I0520 11:50:18.608761 7 log.go:172] (0xc0008db8c0) Data frame received for 5 I0520 11:50:18.608784 7 log.go:172] (0xc0020bc8c0) (5) Data frame handling I0520 11:50:18.610253 7 log.go:172] (0xc0008db8c0) Data frame received for 1 I0520 11:50:18.610276 7 log.go:172] (0xc001911900) (1) Data frame handling I0520 11:50:18.610285 7 log.go:172] (0xc001911900) (1) Data frame sent I0520 11:50:18.610296 7 log.go:172] (0xc0008db8c0) (0xc001911900) Stream removed, broadcasting: 1 I0520 11:50:18.610325 7 log.go:172] (0xc0008db8c0) Go away received I0520 11:50:18.610405 7 log.go:172] (0xc0008db8c0) (0xc001911900) Stream removed, broadcasting: 1 I0520 11:50:18.610423 7 log.go:172] (0xc0008db8c0) (0xc0020bc820) Stream removed, broadcasting: 3 I0520 11:50:18.610434 7 log.go:172] (0xc0008db8c0) (0xc0020bc8c0) Stream removed, 
broadcasting: 5 May 20 11:50:18.610: INFO: Exec stderr: "" May 20 11:50:18.610: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-px4ln PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 11:50:18.610: INFO: >>> kubeConfig: /root/.kube/config I0520 11:50:18.640338 7 log.go:172] (0xc0020ba2c0) (0xc0020bcb40) Create stream I0520 11:50:18.640417 7 log.go:172] (0xc0020ba2c0) (0xc0020bcb40) Stream added, broadcasting: 1 I0520 11:50:18.642771 7 log.go:172] (0xc0020ba2c0) Reply frame received for 1 I0520 11:50:18.642811 7 log.go:172] (0xc0020ba2c0) (0xc0019119a0) Create stream I0520 11:50:18.642828 7 log.go:172] (0xc0020ba2c0) (0xc0019119a0) Stream added, broadcasting: 3 I0520 11:50:18.643771 7 log.go:172] (0xc0020ba2c0) Reply frame received for 3 I0520 11:50:18.643810 7 log.go:172] (0xc0020ba2c0) (0xc0012f3a40) Create stream I0520 11:50:18.643822 7 log.go:172] (0xc0020ba2c0) (0xc0012f3a40) Stream added, broadcasting: 5 I0520 11:50:18.644535 7 log.go:172] (0xc0020ba2c0) Reply frame received for 5 I0520 11:50:18.696315 7 log.go:172] (0xc0020ba2c0) Data frame received for 5 I0520 11:50:18.696344 7 log.go:172] (0xc0012f3a40) (5) Data frame handling I0520 11:50:18.696374 7 log.go:172] (0xc0020ba2c0) Data frame received for 3 I0520 11:50:18.696388 7 log.go:172] (0xc0019119a0) (3) Data frame handling I0520 11:50:18.696401 7 log.go:172] (0xc0019119a0) (3) Data frame sent I0520 11:50:18.696413 7 log.go:172] (0xc0020ba2c0) Data frame received for 3 I0520 11:50:18.696417 7 log.go:172] (0xc0019119a0) (3) Data frame handling I0520 11:50:18.701851 7 log.go:172] (0xc0020ba2c0) Data frame received for 1 I0520 11:50:18.701888 7 log.go:172] (0xc0020bcb40) (1) Data frame handling I0520 11:50:18.701918 7 log.go:172] (0xc0020bcb40) (1) Data frame sent I0520 11:50:18.701940 7 log.go:172] (0xc0020ba2c0) (0xc0020bcb40) Stream removed, broadcasting: 1 I0520 11:50:18.701968 7 log.go:172] (0xc0020ba2c0) Go away received I0520 11:50:18.702119 7 log.go:172] (0xc0020ba2c0) (0xc0020bcb40) Stream removed, broadcasting: 1 I0520 11:50:18.702143 7 log.go:172] (0xc0020ba2c0) (0xc0019119a0) Stream removed, broadcasting: 3 I0520 11:50:18.702156 7 log.go:172] (0xc0020ba2c0) (0xc0012f3a40) Stream removed, broadcasting: 5 May 20 11:50:18.702: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 20 11:50:18.702: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-px4ln PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 11:50:18.702: INFO: >>> kubeConfig: /root/.kube/config I0520 11:50:18.724799 7 log.go:172] (0xc000999340) (0xc0012f3d60) Create stream I0520 11:50:18.724825 7 log.go:172] (0xc000999340) (0xc0012f3d60) Stream added, broadcasting: 1 I0520 11:50:18.726843 7 log.go:172] (0xc000999340) Reply frame received for 1 I0520 11:50:18.726871 7 log.go:172] (0xc000999340) (0xc001cce280) Create stream I0520 11:50:18.726881 7 log.go:172] (0xc000999340) (0xc001cce280) Stream added, broadcasting: 3 I0520 11:50:18.727479 7 log.go:172] (0xc000999340) Reply frame received for 3 I0520 11:50:18.727502 7 log.go:172] (0xc000999340) (0xc001cce320) Create stream I0520 11:50:18.727512 7 log.go:172] (0xc000999340) (0xc001cce320) Stream added, broadcasting: 5 I0520 11:50:18.728177 7 log.go:172] (0xc000999340) Reply frame received for 5 I0520 
11:50:18.782252 7 log.go:172] (0xc000999340) Data frame received for 5 I0520 11:50:18.782290 7 log.go:172] (0xc001cce320) (5) Data frame handling I0520 11:50:18.782313 7 log.go:172] (0xc000999340) Data frame received for 3 I0520 11:50:18.782324 7 log.go:172] (0xc001cce280) (3) Data frame handling I0520 11:50:18.782337 7 log.go:172] (0xc001cce280) (3) Data frame sent I0520 11:50:18.782353 7 log.go:172] (0xc000999340) Data frame received for 3 I0520 11:50:18.782364 7 log.go:172] (0xc001cce280) (3) Data frame handling I0520 11:50:18.783777 7 log.go:172] (0xc000999340) Data frame received for 1 I0520 11:50:18.783815 7 log.go:172] (0xc0012f3d60) (1) Data frame handling I0520 11:50:18.783833 7 log.go:172] (0xc0012f3d60) (1) Data frame sent I0520 11:50:18.783844 7 log.go:172] (0xc000999340) (0xc0012f3d60) Stream removed, broadcasting: 1 I0520 11:50:18.783915 7 log.go:172] (0xc000999340) (0xc0012f3d60) Stream removed, broadcasting: 1 I0520 11:50:18.783934 7 log.go:172] (0xc000999340) (0xc001cce280) Stream removed, broadcasting: 3 I0520 11:50:18.783997 7 log.go:172] (0xc000999340) Go away received I0520 11:50:18.784117 7 log.go:172] (0xc000999340) (0xc001cce320) Stream removed, broadcasting: 5 May 20 11:50:18.784: INFO: Exec stderr: "" May 20 11:50:18.784: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-px4ln PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 11:50:18.784: INFO: >>> kubeConfig: /root/.kube/config I0520 11:50:18.815819 7 log.go:172] (0xc0020f82c0) (0xc000b71f40) Create stream I0520 11:50:18.815857 7 log.go:172] (0xc0020f82c0) (0xc000b71f40) Stream added, broadcasting: 1 I0520 11:50:18.817586 7 log.go:172] (0xc0020f82c0) Reply frame received for 1 I0520 11:50:18.817638 7 log.go:172] (0xc0020f82c0) (0xc0012f3e00) Create stream I0520 11:50:18.817657 7 log.go:172] (0xc0020f82c0) (0xc0012f3e00) Stream added, broadcasting: 3 I0520 11:50:18.818504 7 log.go:172] (0xc0020f82c0) Reply frame received for 3 I0520 11:50:18.818538 7 log.go:172] (0xc0020f82c0) (0xc0020bcbe0) Create stream I0520 11:50:18.818550 7 log.go:172] (0xc0020f82c0) (0xc0020bcbe0) Stream added, broadcasting: 5 I0520 11:50:18.819522 7 log.go:172] (0xc0020f82c0) Reply frame received for 5 I0520 11:50:18.886784 7 log.go:172] (0xc0020f82c0) Data frame received for 5 I0520 11:50:18.886831 7 log.go:172] (0xc0020bcbe0) (5) Data frame handling I0520 11:50:18.886859 7 log.go:172] (0xc0020f82c0) Data frame received for 3 I0520 11:50:18.886874 7 log.go:172] (0xc0012f3e00) (3) Data frame handling I0520 11:50:18.886889 7 log.go:172] (0xc0012f3e00) (3) Data frame sent I0520 11:50:18.886923 7 log.go:172] (0xc0020f82c0) Data frame received for 3 I0520 11:50:18.886943 7 log.go:172] (0xc0012f3e00) (3) Data frame handling I0520 11:50:18.888500 7 log.go:172] (0xc0020f82c0) Data frame received for 1 I0520 11:50:18.888524 7 log.go:172] (0xc000b71f40) (1) Data frame handling I0520 11:50:18.888538 7 log.go:172] (0xc000b71f40) (1) Data frame sent I0520 11:50:18.888557 7 log.go:172] (0xc0020f82c0) (0xc000b71f40) Stream removed, broadcasting: 1 I0520 11:50:18.888570 7 log.go:172] (0xc0020f82c0) Go away received I0520 11:50:18.888739 7 log.go:172] (0xc0020f82c0) (0xc000b71f40) Stream removed, broadcasting: 1 I0520 11:50:18.888766 7 log.go:172] (0xc0020f82c0) (0xc0012f3e00) Stream removed, broadcasting: 3 I0520 11:50:18.888779 7 log.go:172] (0xc0020f82c0) (0xc0020bcbe0) Stream removed, broadcasting: 5 May 20 11:50:18.888: 
INFO: Exec stderr: "" May 20 11:50:18.888: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-px4ln PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 11:50:18.888: INFO: >>> kubeConfig: /root/.kube/config I0520 11:50:18.917641 7 log.go:172] (0xc0020f8790) (0xc00177c280) Create stream I0520 11:50:18.917670 7 log.go:172] (0xc0020f8790) (0xc00177c280) Stream added, broadcasting: 1 I0520 11:50:18.919669 7 log.go:172] (0xc0020f8790) Reply frame received for 1 I0520 11:50:18.919713 7 log.go:172] (0xc0020f8790) (0xc0012f3ea0) Create stream I0520 11:50:18.919726 7 log.go:172] (0xc0020f8790) (0xc0012f3ea0) Stream added, broadcasting: 3 I0520 11:50:18.920436 7 log.go:172] (0xc0020f8790) Reply frame received for 3 I0520 11:50:18.920499 7 log.go:172] (0xc0020f8790) (0xc0020bcc80) Create stream I0520 11:50:18.920512 7 log.go:172] (0xc0020f8790) (0xc0020bcc80) Stream added, broadcasting: 5 I0520 11:50:18.921508 7 log.go:172] (0xc0020f8790) Reply frame received for 5 I0520 11:50:18.990731 7 log.go:172] (0xc0020f8790) Data frame received for 5 I0520 11:50:18.990795 7 log.go:172] (0xc0020bcc80) (5) Data frame handling I0520 11:50:18.990841 7 log.go:172] (0xc0020f8790) Data frame received for 3 I0520 11:50:18.990863 7 log.go:172] (0xc0012f3ea0) (3) Data frame handling I0520 11:50:18.990886 7 log.go:172] (0xc0012f3ea0) (3) Data frame sent I0520 11:50:18.990925 7 log.go:172] (0xc0020f8790) Data frame received for 3 I0520 11:50:18.990944 7 log.go:172] (0xc0012f3ea0) (3) Data frame handling I0520 11:50:18.992339 7 log.go:172] (0xc0020f8790) Data frame received for 1 I0520 11:50:18.992363 7 log.go:172] (0xc00177c280) (1) Data frame handling I0520 11:50:18.992383 7 log.go:172] (0xc00177c280) (1) Data frame sent I0520 11:50:18.992403 7 log.go:172] (0xc0020f8790) (0xc00177c280) Stream removed, broadcasting: 1 I0520 11:50:18.992503 7 log.go:172] (0xc0020f8790) (0xc00177c280) Stream removed, broadcasting: 1 I0520 11:50:18.992521 7 log.go:172] (0xc0020f8790) (0xc0012f3ea0) Stream removed, broadcasting: 3 I0520 11:50:18.992695 7 log.go:172] (0xc0020f8790) (0xc0020bcc80) Stream removed, broadcasting: 5 I0520 11:50:18.992739 7 log.go:172] (0xc0020f8790) Go away received May 20 11:50:18.992: INFO: Exec stderr: "" May 20 11:50:18.992: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-px4ln PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 11:50:18.992: INFO: >>> kubeConfig: /root/.kube/config I0520 11:50:19.018743 7 log.go:172] (0xc000999810) (0xc001c821e0) Create stream I0520 11:50:19.018773 7 log.go:172] (0xc000999810) (0xc001c821e0) Stream added, broadcasting: 1 I0520 11:50:19.020396 7 log.go:172] (0xc000999810) Reply frame received for 1 I0520 11:50:19.020436 7 log.go:172] (0xc000999810) (0xc001c82280) Create stream I0520 11:50:19.020446 7 log.go:172] (0xc000999810) (0xc001c82280) Stream added, broadcasting: 3 I0520 11:50:19.021629 7 log.go:172] (0xc000999810) Reply frame received for 3 I0520 11:50:19.021649 7 log.go:172] (0xc000999810) (0xc0020bcd20) Create stream I0520 11:50:19.021660 7 log.go:172] (0xc000999810) (0xc0020bcd20) Stream added, broadcasting: 5 I0520 11:50:19.022520 7 log.go:172] (0xc000999810) Reply frame received for 5 I0520 11:50:19.086842 7 log.go:172] (0xc000999810) Data frame received for 5 I0520 11:50:19.086915 7 log.go:172] (0xc0020bcd20) (5) Data 
frame handling I0520 11:50:19.086961 7 log.go:172] (0xc000999810) Data frame received for 3 I0520 11:50:19.086978 7 log.go:172] (0xc001c82280) (3) Data frame handling I0520 11:50:19.087001 7 log.go:172] (0xc001c82280) (3) Data frame sent I0520 11:50:19.087015 7 log.go:172] (0xc000999810) Data frame received for 3 I0520 11:50:19.087031 7 log.go:172] (0xc001c82280) (3) Data frame handling I0520 11:50:19.088660 7 log.go:172] (0xc000999810) Data frame received for 1 I0520 11:50:19.088698 7 log.go:172] (0xc001c821e0) (1) Data frame handling I0520 11:50:19.088732 7 log.go:172] (0xc001c821e0) (1) Data frame sent I0520 11:50:19.088765 7 log.go:172] (0xc000999810) (0xc001c821e0) Stream removed, broadcasting: 1 I0520 11:50:19.088795 7 log.go:172] (0xc000999810) Go away received I0520 11:50:19.088905 7 log.go:172] (0xc000999810) (0xc001c821e0) Stream removed, broadcasting: 1 I0520 11:50:19.088934 7 log.go:172] (0xc000999810) (0xc001c82280) Stream removed, broadcasting: 3 I0520 11:50:19.088952 7 log.go:172] (0xc000999810) (0xc0020bcd20) Stream removed, broadcasting: 5 May 20 11:50:19.088: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:50:19.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-px4ln" for this suite. May 20 11:51:03.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:51:03.124: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-px4ln, resource: bindings, ignored listing per whitelist May 20 11:51:03.190: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-px4ln deletion completed in 44.096961844s • [SLOW TEST:59.352 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:51:03.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 20 11:51:10.174: INFO: 10 pods remaining May 20 11:51:10.174: INFO: 10 pods has nil DeletionTimestamp May 20 11:51:10.174: INFO: May 20 11:51:12.217: INFO: 8 pods remaining May 20 11:51:12.217: INFO: 0 pods has nil DeletionTimestamp May 20 11:51:12.217: INFO: May 20 11:51:13.533: INFO: 0 pods remaining May 20 11:51:13.533: INFO: 0 pods has nil DeletionTimestamp May 20 11:51:13.533: INFO: STEP: Gathering metrics W0520 11:51:14.335153 7 metrics_grabber.go:81] Master node is not 
registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 20 11:51:14.335: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:51:14.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-6wctw" for this suite. May 20 11:51:22.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:51:22.440: INFO: namespace: e2e-tests-gc-6wctw, resource: bindings, ignored listing per whitelist May 20 11:51:22.463: INFO: namespace e2e-tests-gc-6wctw deletion completed in 8.123818636s • [SLOW TEST:19.273 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:51:22.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-39f984e0-9a90-11ea-b520-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-39f9853e-9a90-11ea-b520-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-39f984e0-9a90-11ea-b520-0242ac110018 STEP: Updating configmap cm-test-opt-upd-39f9853e-9a90-11ea-b520-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-39f98563-9a90-11ea-b520-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:51:31.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-j7kdp" for this suite. May 20 11:51:55.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:51:55.374: INFO: namespace: e2e-tests-configmap-j7kdp, resource: bindings, ignored listing per whitelist May 20 11:51:55.423: INFO: namespace e2e-tests-configmap-j7kdp deletion completed in 24.082638161s • [SLOW TEST:32.960 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:51:55.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 11:51:55.594: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 20 11:51:55.617: INFO: Number of nodes with available pods: 0 May 20 11:51:55.617: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
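------------------------------
The DaemonSet run above pins the daemon pods to labelled nodes via a nodeSelector in the pod template, then relabels a node ("blue"/"green") to pull the pod onto or off that node. The sketch below shows that shape with client-go; it is not the e2e framework's implementation. It assumes the same pre-context client-go vintage as this v1.13 log, a hypothetical package name, and a clientset wired up as in the earlier sketch; the label key, names, namespace, and image are illustrative stand-ins for whatever the framework generates.

package e2esketch

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createSelectorDaemonSet creates a DaemonSet whose pod template carries a nodeSelector,
// so its pods only land on nodes that already carry the matching label.
func createSelectorDaemonSet(client kubernetes.Interface, namespace string) error {
    podLabels := map[string]string{"daemonset-name": "daemon-set"}
    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: podLabels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: podLabels},
                Spec: corev1.PodSpec{
                    NodeSelector: map[string]string{"color": "blue"}, // illustrative key/value
                    Containers: []corev1.Container{
                        {Name: "app", Image: "docker.io/library/nginx:1.14-alpine"},
                    },
                },
            },
        },
    }
    _, err := client.AppsV1().DaemonSets(namespace).Create(ds)
    return err
}

// labelNode adds the selector label to a node so the daemon pod gets scheduled there,
// mirroring the "Change node label to blue" step in the log above.
func labelNode(client kubernetes.Interface, nodeName string) error {
    node, err := client.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{})
    if err != nil {
        return err
    }
    if node.Labels == nil {
        node.Labels = map[string]string{}
    }
    node.Labels["color"] = "blue"
    _, err = client.CoreV1().Nodes().Update(node)
    return err
}
------------------------------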
May 20 11:51:55.711: INFO: Number of nodes with available pods: 0 May 20 11:51:55.711: INFO: Node hunter-worker is running more than one daemon pod May 20 11:51:56.715: INFO: Number of nodes with available pods: 0 May 20 11:51:56.715: INFO: Node hunter-worker is running more than one daemon pod May 20 11:51:57.717: INFO: Number of nodes with available pods: 0 May 20 11:51:57.717: INFO: Node hunter-worker is running more than one daemon pod May 20 11:51:58.715: INFO: Number of nodes with available pods: 0 May 20 11:51:58.715: INFO: Node hunter-worker is running more than one daemon pod May 20 11:51:59.714: INFO: Number of nodes with available pods: 0 May 20 11:51:59.714: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:00.715: INFO: Number of nodes with available pods: 1 May 20 11:52:00.715: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 20 11:52:00.756: INFO: Number of nodes with available pods: 1 May 20 11:52:00.756: INFO: Number of running nodes: 0, number of available pods: 1 May 20 11:52:01.762: INFO: Number of nodes with available pods: 0 May 20 11:52:01.762: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 20 11:52:01.771: INFO: Number of nodes with available pods: 0 May 20 11:52:01.771: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:02.910: INFO: Number of nodes with available pods: 0 May 20 11:52:02.910: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:03.796: INFO: Number of nodes with available pods: 0 May 20 11:52:03.796: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:04.777: INFO: Number of nodes with available pods: 0 May 20 11:52:04.777: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:05.776: INFO: Number of nodes with available pods: 0 May 20 11:52:05.776: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:06.775: INFO: Number of nodes with available pods: 0 May 20 11:52:06.775: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:07.774: INFO: Number of nodes with available pods: 0 May 20 11:52:07.775: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:08.775: INFO: Number of nodes with available pods: 0 May 20 11:52:08.775: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:09.774: INFO: Number of nodes with available pods: 0 May 20 11:52:09.774: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:10.776: INFO: Number of nodes with available pods: 0 May 20 11:52:10.776: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:11.776: INFO: Number of nodes with available pods: 0 May 20 11:52:11.776: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:12.922: INFO: Number of nodes with available pods: 0 May 20 11:52:12.922: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:13.775: INFO: Number of nodes with available pods: 0 May 20 11:52:13.775: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:14.775: INFO: Number of nodes with available pods: 0 May 20 11:52:14.775: INFO: Node hunter-worker is running more than one daemon pod May 20 11:52:15.785: INFO: Number of nodes with available pods: 1 May 20 11:52:15.785: INFO: Number of running nodes: 1, 
number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-9qr2m, will wait for the garbage collector to delete the pods May 20 11:52:15.851: INFO: Deleting DaemonSet.extensions daemon-set took: 9.197413ms May 20 11:52:15.952: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.21192ms May 20 11:52:31.355: INFO: Number of nodes with available pods: 0 May 20 11:52:31.355: INFO: Number of running nodes: 0, number of available pods: 0 May 20 11:52:31.359: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9qr2m/daemonsets","resourceVersion":"11570712"},"items":null} May 20 11:52:31.361: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9qr2m/pods","resourceVersion":"11570712"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:52:31.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-9qr2m" for this suite. May 20 11:52:37.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:52:37.420: INFO: namespace: e2e-tests-daemonsets-9qr2m, resource: bindings, ignored listing per whitelist May 20 11:52:37.472: INFO: namespace e2e-tests-daemonsets-9qr2m deletion completed in 6.077196335s • [SLOW TEST:42.048 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:52:37.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 20 11:52:37.586: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m7cw9,SelfLink:/api/v1/namespaces/e2e-tests-watch-m7cw9/configmaps/e2e-watch-test-configmap-a,UID:669aa4db-9a90-11ea-99e8-0242ac110002,ResourceVersion:11570749,Generation:0,CreationTimestamp:2020-05-20 11:52:37 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 20 11:52:37.586: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m7cw9,SelfLink:/api/v1/namespaces/e2e-tests-watch-m7cw9/configmaps/e2e-watch-test-configmap-a,UID:669aa4db-9a90-11ea-99e8-0242ac110002,ResourceVersion:11570749,Generation:0,CreationTimestamp:2020-05-20 11:52:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 20 11:52:47.593: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m7cw9,SelfLink:/api/v1/namespaces/e2e-tests-watch-m7cw9/configmaps/e2e-watch-test-configmap-a,UID:669aa4db-9a90-11ea-99e8-0242ac110002,ResourceVersion:11570769,Generation:0,CreationTimestamp:2020-05-20 11:52:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 20 11:52:47.593: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m7cw9,SelfLink:/api/v1/namespaces/e2e-tests-watch-m7cw9/configmaps/e2e-watch-test-configmap-a,UID:669aa4db-9a90-11ea-99e8-0242ac110002,ResourceVersion:11570769,Generation:0,CreationTimestamp:2020-05-20 11:52:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 20 11:52:57.602: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m7cw9,SelfLink:/api/v1/namespaces/e2e-tests-watch-m7cw9/configmaps/e2e-watch-test-configmap-a,UID:669aa4db-9a90-11ea-99e8-0242ac110002,ResourceVersion:11570789,Generation:0,CreationTimestamp:2020-05-20 11:52:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 20 11:52:57.602: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m7cw9,SelfLink:/api/v1/namespaces/e2e-tests-watch-m7cw9/configmaps/e2e-watch-test-configmap-a,UID:669aa4db-9a90-11ea-99e8-0242ac110002,ResourceVersion:11570789,Generation:0,CreationTimestamp:2020-05-20 11:52:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 20 11:53:07.609: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m7cw9,SelfLink:/api/v1/namespaces/e2e-tests-watch-m7cw9/configmaps/e2e-watch-test-configmap-a,UID:669aa4db-9a90-11ea-99e8-0242ac110002,ResourceVersion:11570809,Generation:0,CreationTimestamp:2020-05-20 11:52:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 20 11:53:07.609: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m7cw9,SelfLink:/api/v1/namespaces/e2e-tests-watch-m7cw9/configmaps/e2e-watch-test-configmap-a,UID:669aa4db-9a90-11ea-99e8-0242ac110002,ResourceVersion:11570809,Generation:0,CreationTimestamp:2020-05-20 11:52:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 20 11:53:17.617: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-m7cw9,SelfLink:/api/v1/namespaces/e2e-tests-watch-m7cw9/configmaps/e2e-watch-test-configmap-b,UID:7e7a0095-9a90-11ea-99e8-0242ac110002,ResourceVersion:11570829,Generation:0,CreationTimestamp:2020-05-20 11:53:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 20 11:53:17.618: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-m7cw9,SelfLink:/api/v1/namespaces/e2e-tests-watch-m7cw9/configmaps/e2e-watch-test-configmap-b,UID:7e7a0095-9a90-11ea-99e8-0242ac110002,ResourceVersion:11570829,Generation:0,CreationTimestamp:2020-05-20 11:53:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 20 11:53:27.624: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-m7cw9,SelfLink:/api/v1/namespaces/e2e-tests-watch-m7cw9/configmaps/e2e-watch-test-configmap-b,UID:7e7a0095-9a90-11ea-99e8-0242ac110002,ResourceVersion:11570849,Generation:0,CreationTimestamp:2020-05-20 11:53:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 20 11:53:27.624: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-m7cw9,SelfLink:/api/v1/namespaces/e2e-tests-watch-m7cw9/configmaps/e2e-watch-test-configmap-b,UID:7e7a0095-9a90-11ea-99e8-0242ac110002,ResourceVersion:11570849,Generation:0,CreationTimestamp:2020-05-20 11:53:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:53:37.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-m7cw9" for this suite. 
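The watch test above exercises label-selected ConfigMap watches: the "A" watcher sees ADDED, MODIFIED and DELETED events for configmap A, the "B" watcher only for configmap B, and the "A or B" watcher sees both. A minimal sketch of the same event stream with kubectl, using a placeholder namespace rather than the generated e2e-tests-watch-* one:

# Start a watch restricted to the label the "A" watcher selects on.
kubectl create namespace watch-demo
kubectl get configmaps -n watch-demo -l watch-this-configmap=multiple-watchers-A --watch &
WATCH_PID=$!

# ADDED: the object is created already carrying the selected label.
kubectl apply -n watch-demo -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
EOF

# MODIFIED: mutate the data, as the test does with its "mutation" counter.
kubectl patch configmap e2e-watch-test-configmap-a -n watch-demo \
  --type merge -p '{"data":{"mutation":"1"}}'

# DELETED
kubectl delete configmap e2e-watch-test-configmap-a -n watch-demo
kill "$WATCH_PID"

A second watch on a different label value (the B selector) stays silent through all three operations, which is what the paired log lines above assert.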
May 20 11:53:43.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:53:43.814: INFO: namespace: e2e-tests-watch-m7cw9, resource: bindings, ignored listing per whitelist May 20 11:53:43.853: INFO: namespace e2e-tests-watch-m7cw9 deletion completed in 6.223171975s • [SLOW TEST:66.380 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:53:43.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 11:53:44.066: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:53:45.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-qxjtr" for this suite. 
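The CustomResourceDefinition check above is only a round trip: register a CRD, confirm the new API type appears, delete it again. A hand-rolled sketch against the same 1.13-era API; the foos.example.com group and kind are invented placeholders, and clusters newer than 1.21 need apiextensions.k8s.io/v1 with a versions list and schema instead of the v1beta1 form shown here:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com          # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF

# The new resource type becomes visible, then disappears again on delete.
kubectl get crd foos.example.com
kubectl api-resources --api-group=example.com
kubectl delete crd foos.example.com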
May 20 11:53:51.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:53:51.472: INFO: namespace: e2e-tests-custom-resource-definition-qxjtr, resource: bindings, ignored listing per whitelist May 20 11:53:51.532: INFO: namespace e2e-tests-custom-resource-definition-qxjtr deletion completed in 6.152001322s • [SLOW TEST:7.679 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:53:51.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-mxvv2/secret-test-92c731c5-9a90-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 11:53:51.692: INFO: Waiting up to 5m0s for pod "pod-configmaps-92c91a4c-9a90-11ea-b520-0242ac110018" in namespace "e2e-tests-secrets-mxvv2" to be "success or failure" May 20 11:53:51.695: INFO: Pod "pod-configmaps-92c91a4c-9a90-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.09815ms May 20 11:53:53.792: INFO: Pod "pod-configmaps-92c91a4c-9a90-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09949942s May 20 11:53:55.840: INFO: Pod "pod-configmaps-92c91a4c-9a90-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.147450357s May 20 11:53:57.844: INFO: Pod "pod-configmaps-92c91a4c-9a90-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.152104397s STEP: Saw pod success May 20 11:53:57.844: INFO: Pod "pod-configmaps-92c91a4c-9a90-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:53:57.848: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-92c91a4c-9a90-11ea-b520-0242ac110018 container env-test: STEP: delete the pod May 20 11:53:57.888: INFO: Waiting for pod pod-configmaps-92c91a4c-9a90-11ea-b520-0242ac110018 to disappear May 20 11:53:57.899: INFO: Pod pod-configmaps-92c91a4c-9a90-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:53:57.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-mxvv2" for this suite. 
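The Secrets test above injects a secret key into a container through an environment variable and checks the value in the pod log. A stripped-down sketch with placeholder names and busybox standing in for the e2e test image:

kubectl create secret generic secret-env-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-test
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF

# Once the pod has completed, its log should contain SECRET_DATA=value-1.
kubectl logs env-test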
May 20 11:54:03.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:54:04.051: INFO: namespace: e2e-tests-secrets-mxvv2, resource: bindings, ignored listing per whitelist May 20 11:54:04.061: INFO: namespace e2e-tests-secrets-mxvv2 deletion completed in 6.125489404s • [SLOW TEST:12.528 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:54:04.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 11:54:04.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 20 11:54:04.286: INFO: stderr: "" May 20 11:54:04.286: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:54:04.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fzl2x" for this suite. 
May 20 11:54:10.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:54:10.336: INFO: namespace: e2e-tests-kubectl-fzl2x, resource: bindings, ignored listing per whitelist May 20 11:54:10.570: INFO: namespace e2e-tests-kubectl-fzl2x deletion completed in 6.280125989s • [SLOW TEST:6.509 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:54:10.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 20 11:54:15.042: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:54:39.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-grmzs" for this suite. May 20 11:54:45.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:54:45.240: INFO: namespace: e2e-tests-namespaces-grmzs, resource: bindings, ignored listing per whitelist May 20 11:54:45.247: INFO: namespace e2e-tests-namespaces-grmzs deletion completed in 6.071823765s STEP: Destroying namespace "e2e-tests-nsdeletetest-sq5tk" for this suite. May 20 11:54:45.249: INFO: Namespace e2e-tests-nsdeletetest-sq5tk was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-f4mkd" for this suite. 
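The Namespaces test above boils down to: a pod created inside a namespace must be gone once that namespace has been deleted and recreated. The same check by hand, with invented names:

kubectl create namespace nsdelete-demo
kubectl run sleeper --image=busybox --restart=Never -n nsdelete-demo -- sleep 3600

# Namespace deletion is asynchronous; --wait blocks until the finalizers have run.
kubectl delete namespace nsdelete-demo --wait=true

# Recreate the namespace: it comes back empty.
kubectl create namespace nsdelete-demo
kubectl get pods -n nsdelete-demo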
May 20 11:54:51.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:54:51.313: INFO: namespace: e2e-tests-nsdeletetest-f4mkd, resource: bindings, ignored listing per whitelist May 20 11:54:51.334: INFO: namespace e2e-tests-nsdeletetest-f4mkd deletion completed in 6.085361085s • [SLOW TEST:40.764 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:54:51.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 20 11:54:56.241: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b67967a0-9a90-11ea-b520-0242ac110018" May 20 11:54:56.241: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b67967a0-9a90-11ea-b520-0242ac110018" in namespace "e2e-tests-pods-hl2qg" to be "terminated due to deadline exceeded" May 20 11:54:56.274: INFO: Pod "pod-update-activedeadlineseconds-b67967a0-9a90-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 33.138376ms May 20 11:54:58.355: INFO: Pod "pod-update-activedeadlineseconds-b67967a0-9a90-11ea-b520-0242ac110018": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.113901557s May 20 11:54:58.355: INFO: Pod "pod-update-activedeadlineseconds-b67967a0-9a90-11ea-b520-0242ac110018" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:54:58.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-hl2qg" for this suite. 
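The Pods test above relies on activeDeadlineSeconds being one of the few mutable pod-spec fields: it can be lowered on a running pod, after which the kubelet terminates the pod with reason DeadlineExceeded. A sketch with placeholder names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo
spec:
  activeDeadlineSeconds: 600
  containers:
  - name: sleeper
    image: busybox
    command: ["sleep", "3600"]
EOF

# Once set, the deadline may only be shortened, never raised or removed.
kubectl patch pod deadline-demo --type merge -p '{"spec":{"activeDeadlineSeconds":5}}'

# A few seconds later the pod is Failed with reason DeadlineExceeded.
kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}'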
May 20 11:55:04.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:55:04.433: INFO: namespace: e2e-tests-pods-hl2qg, resource: bindings, ignored listing per whitelist May 20 11:55:04.462: INFO: namespace e2e-tests-pods-hl2qg deletion completed in 6.104460329s • [SLOW TEST:13.128 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:55:04.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-be369de9-9a90-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 11:55:04.596: INFO: Waiting up to 5m0s for pod "pod-secrets-be38d603-9a90-11ea-b520-0242ac110018" in namespace "e2e-tests-secrets-tphbs" to be "success or failure" May 20 11:55:04.602: INFO: Pod "pod-secrets-be38d603-9a90-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.944393ms May 20 11:55:06.606: INFO: Pod "pod-secrets-be38d603-9a90-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009649358s May 20 11:55:08.610: INFO: Pod "pod-secrets-be38d603-9a90-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013665878s STEP: Saw pod success May 20 11:55:08.610: INFO: Pod "pod-secrets-be38d603-9a90-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:55:08.612: INFO: Trying to get logs from node hunter-worker pod pod-secrets-be38d603-9a90-11ea-b520-0242ac110018 container secret-volume-test: STEP: delete the pod May 20 11:55:08.633: INFO: Waiting for pod pod-secrets-be38d603-9a90-11ea-b520-0242ac110018 to disappear May 20 11:55:08.637: INFO: Pod pod-secrets-be38d603-9a90-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:55:08.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tphbs" for this suite. 
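The companion Secrets test above mounts the secret as a volume instead of an environment variable; each key becomes a file under the mount path. Sketch, again with placeholder names and busybox in place of the e2e image:

kubectl create secret generic secret-vol-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-vol-demo
EOF

kubectl logs secret-volume-test    # prints value-1 once the pod has completed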
May 20 11:55:14.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:55:14.698: INFO: namespace: e2e-tests-secrets-tphbs, resource: bindings, ignored listing per whitelist May 20 11:55:14.728: INFO: namespace e2e-tests-secrets-tphbs deletion completed in 6.088044031s • [SLOW TEST:10.266 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:55:14.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller May 20 11:55:14.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-c4g7p' May 20 11:55:17.317: INFO: stderr: "" May 20 11:55:17.317: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 20 11:55:17.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c4g7p' May 20 11:55:17.447: INFO: stderr: "" May 20 11:55:17.447: INFO: stdout: "update-demo-nautilus-26b6c update-demo-nautilus-6pmzh " May 20 11:55:17.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-26b6c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4g7p' May 20 11:55:17.540: INFO: stderr: "" May 20 11:55:17.540: INFO: stdout: "" May 20 11:55:17.540: INFO: update-demo-nautilus-26b6c is created but not running May 20 11:55:22.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c4g7p' May 20 11:55:22.655: INFO: stderr: "" May 20 11:55:22.655: INFO: stdout: "update-demo-nautilus-26b6c update-demo-nautilus-6pmzh " May 20 11:55:22.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-26b6c -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4g7p' May 20 11:55:22.747: INFO: stderr: "" May 20 11:55:22.747: INFO: stdout: "true" May 20 11:55:22.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-26b6c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4g7p' May 20 11:55:22.851: INFO: stderr: "" May 20 11:55:22.851: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 11:55:22.851: INFO: validating pod update-demo-nautilus-26b6c May 20 11:55:22.855: INFO: got data: { "image": "nautilus.jpg" } May 20 11:55:22.855: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 11:55:22.855: INFO: update-demo-nautilus-26b6c is verified up and running May 20 11:55:22.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pmzh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4g7p' May 20 11:55:22.955: INFO: stderr: "" May 20 11:55:22.955: INFO: stdout: "true" May 20 11:55:22.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pmzh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4g7p' May 20 11:55:23.052: INFO: stderr: "" May 20 11:55:23.052: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 11:55:23.052: INFO: validating pod update-demo-nautilus-6pmzh May 20 11:55:23.055: INFO: got data: { "image": "nautilus.jpg" } May 20 11:55:23.055: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 11:55:23.055: INFO: update-demo-nautilus-6pmzh is verified up and running STEP: rolling-update to new replication controller May 20 11:55:23.057: INFO: scanned /root for discovery docs: May 20 11:55:23.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-c4g7p' May 20 11:55:45.723: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 20 11:55:45.723: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 20 11:55:45.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c4g7p' May 20 11:55:45.827: INFO: stderr: "" May 20 11:55:45.827: INFO: stdout: "update-demo-kitten-g4kps update-demo-kitten-xcr8f " May 20 11:55:45.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-g4kps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4g7p' May 20 11:55:45.940: INFO: stderr: "" May 20 11:55:45.940: INFO: stdout: "true" May 20 11:55:45.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-g4kps -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4g7p' May 20 11:55:46.029: INFO: stderr: "" May 20 11:55:46.029: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 20 11:55:46.029: INFO: validating pod update-demo-kitten-g4kps May 20 11:55:46.045: INFO: got data: { "image": "kitten.jpg" } May 20 11:55:46.045: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 20 11:55:46.045: INFO: update-demo-kitten-g4kps is verified up and running May 20 11:55:46.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xcr8f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4g7p' May 20 11:55:46.153: INFO: stderr: "" May 20 11:55:46.153: INFO: stdout: "true" May 20 11:55:46.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xcr8f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4g7p' May 20 11:55:46.248: INFO: stderr: "" May 20 11:55:46.248: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 20 11:55:46.248: INFO: validating pod update-demo-kitten-xcr8f May 20 11:55:46.253: INFO: got data: { "image": "kitten.jpg" } May 20 11:55:46.253: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 20 11:55:46.253: INFO: update-demo-kitten-xcr8f is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:55:46.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-c4g7p" for this suite. 
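The Update Demo test above drives kubectl rolling-update, which (as its own stderr notes) is deprecated: it swaps the pods of one ReplicationController for another, one at a time. A rough reproduction for a cluster old enough to still ship the command; the images are the public e2e demo images, everything else is a placeholder. On current clusters the same flow is a Deployment plus kubectl set image and kubectl rollout status.

# Initial controller: two nautilus pods.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
EOF

# Roll every pod over to the kitten image, one replacement per second.
kubectl rolling-update update-demo-nautilus --update-period=1s \
  --image=gcr.io/kubernetes-e2e-test-images/kitten:1.0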
May 20 11:56:10.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:56:10.296: INFO: namespace: e2e-tests-kubectl-c4g7p, resource: bindings, ignored listing per whitelist May 20 11:56:10.343: INFO: namespace e2e-tests-kubectl-c4g7p deletion completed in 24.086503319s • [SLOW TEST:55.615 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:56:10.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-8hb4l May 20 11:56:14.470: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-8hb4l STEP: checking the pod's current state and verifying that restartCount is present May 20 11:56:14.473: INFO: Initial restart count of pod liveness-http is 0 May 20 11:56:36.686: INFO: Restart count of pod e2e-tests-container-probe-8hb4l/liveness-http is now 1 (22.212511847s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:56:36.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-8hb4l" for this suite. 
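The probe test above wants exactly one restart: the container's /healthz endpoint goes unhealthy after a while, the kubelet's HTTP liveness probe notices, and restartCount ticks up. A sketch built on the long-standing upstream liveness demo image, which serves /healthz OK for roughly ten seconds and then returns 500; any HTTP server whose health endpoint starts failing behaves the same way:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness     # demo image from the Kubernetes probe docs
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1
EOF

# After a few probe periods the restart count is no longer 0, which is what the test asserts.
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'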
May 20 11:56:43.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:56:43.171: INFO: namespace: e2e-tests-container-probe-8hb4l, resource: bindings, ignored listing per whitelist May 20 11:56:43.193: INFO: namespace e2e-tests-container-probe-8hb4l deletion completed in 6.267095873s • [SLOW TEST:32.850 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:56:43.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 20 11:56:43.384: INFO: Waiting up to 5m0s for pod "pod-f91c9bba-9a90-11ea-b520-0242ac110018" in namespace "e2e-tests-emptydir-mppn8" to be "success or failure" May 20 11:56:43.388: INFO: Pod "pod-f91c9bba-9a90-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.643108ms May 20 11:56:45.775: INFO: Pod "pod-f91c9bba-9a90-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.391230561s May 20 11:56:47.779: INFO: Pod "pod-f91c9bba-9a90-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.395529723s May 20 11:56:49.784: INFO: Pod "pod-f91c9bba-9a90-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.400223894s STEP: Saw pod success May 20 11:56:49.784: INFO: Pod "pod-f91c9bba-9a90-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:56:49.787: INFO: Trying to get logs from node hunter-worker pod pod-f91c9bba-9a90-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 11:56:49.920: INFO: Waiting for pod pod-f91c9bba-9a90-11ea-b520-0242ac110018 to disappear May 20 11:56:50.051: INFO: Pod pod-f91c9bba-9a90-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:56:50.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mppn8" for this suite. 
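The EmptyDir matrix entry above checks three things at once: the volume is writable by a non-root user, a file lands with mode 0644, and the medium is the node's disk (the default) rather than tmpfs. A sketch using busybox and a umask to get the 0644 mode; the uid is arbitrary:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "umask 022 && echo hello > /mnt/volume/file && ls -ln /mnt/volume/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}                 # default medium: backed by node disk, not memory
EOF

kubectl logs emptydir-demo       # shows -rw-r--r-- (0644) owned by uid 1001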
May 20 11:56:56.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:56:56.120: INFO: namespace: e2e-tests-emptydir-mppn8, resource: bindings, ignored listing per whitelist May 20 11:56:56.134: INFO: namespace e2e-tests-emptydir-mppn8 deletion completed in 6.079402906s • [SLOW TEST:12.941 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:56:56.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-4r4xl STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 11:56:56.447: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 20 11:57:24.695: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.140 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-4r4xl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 11:57:24.695: INFO: >>> kubeConfig: /root/.kube/config I0520 11:57:24.732626 7 log.go:172] (0xc000998370) (0xc0012f26e0) Create stream I0520 11:57:24.732664 7 log.go:172] (0xc000998370) (0xc0012f26e0) Stream added, broadcasting: 1 I0520 11:57:24.734850 7 log.go:172] (0xc000998370) Reply frame received for 1 I0520 11:57:24.734896 7 log.go:172] (0xc000998370) (0xc0008e0820) Create stream I0520 11:57:24.734914 7 log.go:172] (0xc000998370) (0xc0008e0820) Stream added, broadcasting: 3 I0520 11:57:24.736035 7 log.go:172] (0xc000998370) Reply frame received for 3 I0520 11:57:24.736073 7 log.go:172] (0xc000998370) (0xc001dce500) Create stream I0520 11:57:24.736085 7 log.go:172] (0xc000998370) (0xc001dce500) Stream added, broadcasting: 5 I0520 11:57:24.737097 7 log.go:172] (0xc000998370) Reply frame received for 5 I0520 11:57:25.925804 7 log.go:172] (0xc000998370) Data frame received for 3 I0520 11:57:25.925868 7 log.go:172] (0xc0008e0820) (3) Data frame handling I0520 11:57:25.925897 7 log.go:172] (0xc0008e0820) (3) Data frame sent I0520 11:57:25.925909 7 log.go:172] (0xc000998370) Data frame received for 3 I0520 11:57:25.925921 7 log.go:172] (0xc0008e0820) (3) Data frame handling I0520 11:57:25.925991 7 log.go:172] (0xc000998370) Data frame received for 5 I0520 11:57:25.926034 7 log.go:172] (0xc001dce500) (5) Data frame handling I0520 11:57:25.928058 7 log.go:172] (0xc000998370) Data frame received for 1 I0520 11:57:25.928082 7 log.go:172] 
(0xc0012f26e0) (1) Data frame handling I0520 11:57:25.928095 7 log.go:172] (0xc0012f26e0) (1) Data frame sent I0520 11:57:25.928106 7 log.go:172] (0xc000998370) (0xc0012f26e0) Stream removed, broadcasting: 1 I0520 11:57:25.928218 7 log.go:172] (0xc000998370) (0xc0012f26e0) Stream removed, broadcasting: 1 I0520 11:57:25.928232 7 log.go:172] (0xc000998370) (0xc0008e0820) Stream removed, broadcasting: 3 I0520 11:57:25.928605 7 log.go:172] (0xc000998370) Go away received I0520 11:57:25.928669 7 log.go:172] (0xc000998370) (0xc001dce500) Stream removed, broadcasting: 5 May 20 11:57:25.928: INFO: Found all expected endpoints: [netserver-0] May 20 11:57:25.931: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.168 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-4r4xl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 11:57:25.931: INFO: >>> kubeConfig: /root/.kube/config I0520 11:57:25.958016 7 log.go:172] (0xc000998840) (0xc0012f2aa0) Create stream I0520 11:57:25.958045 7 log.go:172] (0xc000998840) (0xc0012f2aa0) Stream added, broadcasting: 1 I0520 11:57:25.959599 7 log.go:172] (0xc000998840) Reply frame received for 1 I0520 11:57:25.959628 7 log.go:172] (0xc000998840) (0xc0007646e0) Create stream I0520 11:57:25.959638 7 log.go:172] (0xc000998840) (0xc0007646e0) Stream added, broadcasting: 3 I0520 11:57:25.960443 7 log.go:172] (0xc000998840) Reply frame received for 3 I0520 11:57:25.960467 7 log.go:172] (0xc000998840) (0xc001dce5a0) Create stream I0520 11:57:25.960477 7 log.go:172] (0xc000998840) (0xc001dce5a0) Stream added, broadcasting: 5 I0520 11:57:25.961435 7 log.go:172] (0xc000998840) Reply frame received for 5 I0520 11:57:27.022461 7 log.go:172] (0xc000998840) Data frame received for 3 I0520 11:57:27.022567 7 log.go:172] (0xc0007646e0) (3) Data frame handling I0520 11:57:27.022643 7 log.go:172] (0xc0007646e0) (3) Data frame sent I0520 11:57:27.022802 7 log.go:172] (0xc000998840) Data frame received for 5 I0520 11:57:27.022836 7 log.go:172] (0xc001dce5a0) (5) Data frame handling I0520 11:57:27.023170 7 log.go:172] (0xc000998840) Data frame received for 3 I0520 11:57:27.023201 7 log.go:172] (0xc0007646e0) (3) Data frame handling I0520 11:57:27.025108 7 log.go:172] (0xc000998840) Data frame received for 1 I0520 11:57:27.025408 7 log.go:172] (0xc0012f2aa0) (1) Data frame handling I0520 11:57:27.025481 7 log.go:172] (0xc0012f2aa0) (1) Data frame sent I0520 11:57:27.025640 7 log.go:172] (0xc000998840) (0xc0012f2aa0) Stream removed, broadcasting: 1 I0520 11:57:27.025677 7 log.go:172] (0xc000998840) Go away received I0520 11:57:27.025982 7 log.go:172] (0xc000998840) (0xc0012f2aa0) Stream removed, broadcasting: 1 I0520 11:57:27.026015 7 log.go:172] (0xc000998840) (0xc0007646e0) Stream removed, broadcasting: 3 I0520 11:57:27.026054 7 log.go:172] (0xc000998840) (0xc001dce5a0) Stream removed, broadcasting: 5 May 20 11:57:27.026: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:57:27.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-4r4xl" for this suite. 
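The networking check above is just the shell pipeline visible in the ExecWithOptions lines: from a host-network helper pod, send the string hostName over UDP to each netserver pod on port 8081 and expect the pod's hostname back. Rerun by hand it looks like this; the pod and container names are taken from the log (so the helper pod must still exist in the current namespace) and the IP is a placeholder read from the target pod's status:

# IP of one of the netserver pods; take it from `kubectl get pod -o wide`.
POD_IP=10.244.1.140

kubectl exec host-test-container-pod -c hostexec -- /bin/sh -c \
  "echo 'hostName' | nc -w 1 -u ${POD_IP} 8081 | grep -v '^\s*$'"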
May 20 11:57:51.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:57:51.109: INFO: namespace: e2e-tests-pod-network-test-4r4xl, resource: bindings, ignored listing per whitelist May 20 11:57:51.119: INFO: namespace e2e-tests-pod-network-test-4r4xl deletion completed in 24.088613527s • [SLOW TEST:54.984 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:57:51.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 11:57:51.214: INFO: Waiting up to 5m0s for pod "downwardapi-volume-218c83a7-9a91-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-h2slf" to be "success or failure" May 20 11:57:51.238: INFO: Pod "downwardapi-volume-218c83a7-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.491015ms May 20 11:57:53.242: INFO: Pod "downwardapi-volume-218c83a7-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027310346s May 20 11:57:55.245: INFO: Pod "downwardapi-volume-218c83a7-9a91-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030315183s STEP: Saw pod success May 20 11:57:55.245: INFO: Pod "downwardapi-volume-218c83a7-9a91-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:57:55.247: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-218c83a7-9a91-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 11:57:55.416: INFO: Waiting for pod downwardapi-volume-218c83a7-9a91-11ea-b520-0242ac110018 to disappear May 20 11:57:55.427: INFO: Pod downwardapi-volume-218c83a7-9a91-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:57:55.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h2slf" for this suite. 
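The projected downwardAPI test above exposes the container's own CPU request as a file inside the pod. A minimal sketch; the names, the 250m request and the /etc/podinfo path are all placeholders:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m        # file contains "250" for a 250m request
EOF

kubectl logs downwardapi-volume-demo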
May 20 11:58:01.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:58:01.493: INFO: namespace: e2e-tests-projected-h2slf, resource: bindings, ignored listing per whitelist May 20 11:58:01.514: INFO: namespace e2e-tests-projected-h2slf deletion completed in 6.08341289s • [SLOW TEST:10.395 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:58:01.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 20 11:58:08.196: INFO: Successfully updated pod "labelsupdate27c5353b-9a91-11ea-b520-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:58:10.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ns24k" for this suite. 
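The follow-up test above checks the dynamic half of the same volume: when the pod's labels change, the kubelet rewrites the projected labels file in place. Sketch, with invented label keys:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    tier: initial
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF

# Relabel the running pod; the mounted labels file catches up shortly afterwards.
kubectl label pod labelsupdate-demo tier=updated --overwrite
kubectl logs labelsupdate-demo --tail=2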
May 20 11:58:32.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:58:32.355: INFO: namespace: e2e-tests-projected-ns24k, resource: bindings, ignored listing per whitelist May 20 11:58:32.374: INFO: namespace e2e-tests-projected-ns24k deletion completed in 22.108456233s • [SLOW TEST:30.860 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:58:32.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 20 11:58:32.527: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-bg5dx,SelfLink:/api/v1/namespaces/e2e-tests-watch-bg5dx/configmaps/e2e-watch-test-resource-version,UID:3a2a0ad7-9a91-11ea-99e8-0242ac110002,ResourceVersion:11571909,Generation:0,CreationTimestamp:2020-05-20 11:58:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 20 11:58:32.527: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-bg5dx,SelfLink:/api/v1/namespaces/e2e-tests-watch-bg5dx/configmaps/e2e-watch-test-resource-version,UID:3a2a0ad7-9a91-11ea-99e8-0242ac110002,ResourceVersion:11571910,Generation:0,CreationTimestamp:2020-05-20 11:58:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:58:32.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-bg5dx" for this suite. 
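The second Watchers test above starts its watch at a resourceVersion taken from an earlier response, so only the later MODIFIED and DELETED events are replayed. Outside the Go client the closest equivalent is a raw watch request; the namespace and resourceVersion below are placeholders, and the server streams JSON watch events until the connection is closed:

RV=11571907       # resourceVersion returned by the first update (placeholder)
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}&labelSelector=watch-this-configmap%3Dfrom-resource-version"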
May 20 11:58:38.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:58:38.660: INFO: namespace: e2e-tests-watch-bg5dx, resource: bindings, ignored listing per whitelist May 20 11:58:38.737: INFO: namespace e2e-tests-watch-bg5dx deletion completed in 6.193690364s • [SLOW TEST:6.363 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:58:38.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server May 20 11:58:38.935: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:58:39.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zj2hb" for this suite. 
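The proxy test above passes --port 0, which tells kubectl proxy to bind any free port and print the address it chose. A sketch of using that from a script:

kubectl proxy -p 0 --disable-filter=true > /tmp/proxy.out 2>&1 &
PROXY_PID=$!
sleep 1

# kubectl proxy prints e.g. "Starting to serve on 127.0.0.1:37041"; pull out the port.
PORT=$(sed -n 's/.*127\.0\.0\.1:\([0-9]*\).*/\1/p' /tmp/proxy.out | head -n1)
curl -s "http://127.0.0.1:${PORT}/api/"

kill "$PROXY_PID"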
May 20 11:58:45.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:58:45.141: INFO: namespace: e2e-tests-kubectl-zj2hb, resource: bindings, ignored listing per whitelist May 20 11:58:45.200: INFO: namespace e2e-tests-kubectl-zj2hb deletion completed in 6.111021037s • [SLOW TEST:6.463 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:58:45.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 20 11:58:45.467: INFO: Waiting up to 5m0s for pod "downward-api-41e292fa-9a91-11ea-b520-0242ac110018" in namespace "e2e-tests-downward-api-kdn5p" to be "success or failure" May 20 11:58:45.470: INFO: Pod "downward-api-41e292fa-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.248424ms May 20 11:58:47.556: INFO: Pod "downward-api-41e292fa-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08849494s May 20 11:58:49.560: INFO: Pod "downward-api-41e292fa-9a91-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092979237s STEP: Saw pod success May 20 11:58:49.560: INFO: Pod "downward-api-41e292fa-9a91-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:58:49.563: INFO: Trying to get logs from node hunter-worker2 pod downward-api-41e292fa-9a91-11ea-b520-0242ac110018 container dapi-container: STEP: delete the pod May 20 11:58:49.591: INFO: Waiting for pod downward-api-41e292fa-9a91-11ea-b520-0242ac110018 to disappear May 20 11:58:49.596: INFO: Pod downward-api-41e292fa-9a91-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:58:49.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-kdn5p" for this suite. 
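The Downward API test above feeds the pod's own name, namespace and IP into the container as environment variables via fieldRef. Sketch, with placeholder names and busybox in place of the dapi test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo name=$POD_NAME ns=$POD_NAMESPACE ip=$POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF

kubectl logs dapi-env-demo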
May 20 11:58:55.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:58:55.706: INFO: namespace: e2e-tests-downward-api-kdn5p, resource: bindings, ignored listing per whitelist May 20 11:58:55.706: INFO: namespace e2e-tests-downward-api-kdn5p deletion completed in 6.090353396s • [SLOW TEST:10.506 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:58:55.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 20 11:58:55.800: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 11:58:55.821: INFO: Waiting for terminating namespaces to be deleted... May 20 11:58:55.823: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 20 11:58:55.830: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 20 11:58:55.830: INFO: Container kube-proxy ready: true, restart count 0 May 20 11:58:55.830: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 20 11:58:55.830: INFO: Container kindnet-cni ready: true, restart count 0 May 20 11:58:55.830: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 20 11:58:55.830: INFO: Container coredns ready: true, restart count 0 May 20 11:58:55.830: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 20 11:58:55.834: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 20 11:58:55.834: INFO: Container kindnet-cni ready: true, restart count 0 May 20 11:58:55.834: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 20 11:58:55.834: INFO: Container coredns ready: true, restart count 0 May 20 11:58:55.834: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 20 11:58:55.834: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
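The labelling steps above and below boil down to a small set of kubectl operations. A sketch follows, with a made-up label key (the suite generates a random kubernetes.io/e2e-<uid> key) and the node name taken from the log:

kubectl label node hunter-worker example.com/e2e-demo=42
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    example.com/e2e-demo: "42"    # pod may only schedule onto a node carrying this label
  containers:
  - name: sleeper
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
EOF
kubectl label node hunter-worker example.com/e2e-demo-   # remove the label again, as the cleanup step does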
STEP: verifying the node has the label kubernetes.io/e2e-4a7bec73-9a91-11ea-b520-0242ac110018 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-4a7bec73-9a91-11ea-b520-0242ac110018 off the node hunter-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-4a7bec73-9a91-11ea-b520-0242ac110018 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:59:03.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-jcnpq" for this suite. May 20 11:59:11.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:59:12.083: INFO: namespace: e2e-tests-sched-pred-jcnpq, resource: bindings, ignored listing per whitelist May 20 11:59:12.089: INFO: namespace e2e-tests-sched-pred-jcnpq deletion completed in 8.104291262s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:16.383 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:59:12.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 11:59:12.212: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51d4a991-9a91-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-lswc5" to be "success or failure" May 20 11:59:12.244: INFO: Pod "downwardapi-volume-51d4a991-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 32.488746ms May 20 11:59:14.248: INFO: Pod "downwardapi-volume-51d4a991-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036802976s May 20 11:59:16.253: INFO: Pod "downwardapi-volume-51d4a991-9a91-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041284547s STEP: Saw pod success May 20 11:59:16.253: INFO: Pod "downwardapi-volume-51d4a991-9a91-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:59:16.256: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-51d4a991-9a91-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 11:59:16.274: INFO: Waiting for pod downwardapi-volume-51d4a991-9a91-11ea-b520-0242ac110018 to disappear May 20 11:59:16.279: INFO: Pod downwardapi-volume-51d4a991-9a91-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:59:16.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lswc5" for this suite. May 20 11:59:22.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:59:22.395: INFO: namespace: e2e-tests-projected-lswc5, resource: bindings, ignored listing per whitelist May 20 11:59:22.432: INFO: namespace e2e-tests-projected-lswc5 deletion completed in 6.149378757s • [SLOW TEST:10.342 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:59:22.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 20 11:59:22.607: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:59:28.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-nhcdl" for this suite. 
May 20 11:59:34.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:59:34.626: INFO: namespace: e2e-tests-init-container-nhcdl, resource: bindings, ignored listing per whitelist May 20 11:59:34.647: INFO: namespace e2e-tests-init-container-nhcdl deletion completed in 6.108999878s • [SLOW TEST:12.215 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:59:34.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 11:59:34.760: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f42bb6c-9a91-11ea-b520-0242ac110018" in namespace "e2e-tests-downward-api-dbfxr" to be "success or failure" May 20 11:59:34.771: INFO: Pod "downwardapi-volume-5f42bb6c-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.002648ms May 20 11:59:36.775: INFO: Pod "downwardapi-volume-5f42bb6c-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015159446s May 20 11:59:38.934: INFO: Pod "downwardapi-volume-5f42bb6c-9a91-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174129447s STEP: Saw pod success May 20 11:59:38.934: INFO: Pod "downwardapi-volume-5f42bb6c-9a91-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 11:59:38.936: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-5f42bb6c-9a91-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 11:59:38.972: INFO: Waiting for pod downwardapi-volume-5f42bb6c-9a91-11ea-b520-0242ac110018 to disappear May 20 11:59:39.011: INFO: Pod downwardapi-volume-5f42bb6c-9a91-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:59:39.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-dbfxr" for this suite. 
May 20 11:59:45.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:59:45.062: INFO: namespace: e2e-tests-downward-api-dbfxr, resource: bindings, ignored listing per whitelist May 20 11:59:45.165: INFO: namespace e2e-tests-downward-api-dbfxr deletion completed in 6.149716558s • [SLOW TEST:10.518 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:59:45.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 20 11:59:45.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-tncsv' May 20 11:59:45.383: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 20 11:59:45.383: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 20 11:59:45.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-tncsv' May 20 11:59:45.666: INFO: stderr: "" May 20 11:59:45.666: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 11:59:45.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tncsv" for this suite. 
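The kubectl invocation the spec above runs can be repeated in any namespace you own; the commands below mirror the ones logged (minus the generated test namespace). Note that --generator=job/v1 is already deprecated on the kubectl shipped with this suite, as the stderr above shows.

kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine
kubectl get job e2e-test-nginx-job      # confirms a batch/v1 Job was created rather than a bare pod
kubectl delete job e2e-test-nginx-job   # clean up, mirroring the AfterEach above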
May 20 11:59:51.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 11:59:51.901: INFO: namespace: e2e-tests-kubectl-tncsv, resource: bindings, ignored listing per whitelist May 20 11:59:51.931: INFO: namespace e2e-tests-kubectl-tncsv deletion completed in 6.244169875s • [SLOW TEST:6.765 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 11:59:51.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0520 12:00:05.882435 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
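Outside the framework, the ownership relationships this garbage-collector spec relies on can be inspected with kubectl. A sketch follows; the replication controller name is taken from the STEP messages above and only exists inside the (since deleted) test namespace while the spec runs.

# List every pod in the current namespace together with the names of its owners:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[*].name}{"\n"}{end}'
# An owner being deleted with "wait for dependents" (foreground) semantics keeps the
# foregroundDeletion finalizer until only pods it solely owns have been removed:
kubectl get rc simpletest-rc-to-be-deleted -o jsonpath='{.metadata.finalizers}'

A pod that still lists a second, valid owner must therefore survive the deletion of the first one, which is what the spec verifies.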
May 20 12:00:05.882: INFO: For apiserver_request_count:
  For apiserver_request_latencies_summary:
  For etcd_helper_cache_entry_count:
  For etcd_helper_cache_hit_count:
  For etcd_helper_cache_miss_count:
  For etcd_request_cache_add_latencies_summary:
  For etcd_request_cache_get_latencies_summary:
  For etcd_request_latencies_summary:
  For garbage_collector_attempt_to_delete_queue_latency:
  For garbage_collector_attempt_to_delete_work_duration:
  For garbage_collector_attempt_to_orphan_queue_latency:
  For garbage_collector_attempt_to_orphan_work_duration:
  For garbage_collector_dirty_processing_latency_microseconds:
  For garbage_collector_event_processing_latency_microseconds:
  For garbage_collector_graph_changes_queue_latency:
  For garbage_collector_graph_changes_work_duration:
  For garbage_collector_orphan_processing_latency_microseconds:
  For namespace_queue_latency:
  For namespace_queue_latency_sum:
  For namespace_queue_latency_count:
  For namespace_retries:
  For namespace_work_duration:
  For namespace_work_duration_sum:
  For namespace_work_duration_count:
  For function_duration_seconds:
  For errors_total:
  For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:00:05.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-c6zpn" for this suite. May 20 12:00:16.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:00:16.417: INFO: namespace: e2e-tests-gc-c6zpn, resource: bindings, ignored listing per whitelist May 20 12:00:16.424: INFO: namespace e2e-tests-gc-c6zpn deletion completed in 10.333121267s • [SLOW TEST:24.493 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:00:16.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-782f7ab0-9a91-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 12:00:17.012: INFO: Waiting up to 5m0s for pod "pod-configmaps-783c01ca-9a91-11ea-b520-0242ac110018" in namespace "e2e-tests-configmap-jxbvt" to be "success or failure" May 20 12:00:17.030: INFO: Pod "pod-configmaps-783c01ca-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false.
Elapsed: 18.034076ms May 20 12:00:19.034: INFO: Pod "pod-configmaps-783c01ca-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022129055s May 20 12:00:21.060: INFO: Pod "pod-configmaps-783c01ca-9a91-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047842312s STEP: Saw pod success May 20 12:00:21.060: INFO: Pod "pod-configmaps-783c01ca-9a91-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:00:21.063: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-783c01ca-9a91-11ea-b520-0242ac110018 container configmap-volume-test: STEP: delete the pod May 20 12:00:21.094: INFO: Waiting for pod pod-configmaps-783c01ca-9a91-11ea-b520-0242ac110018 to disappear May 20 12:00:21.102: INFO: Pod pod-configmaps-783c01ca-9a91-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:00:21.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-jxbvt" for this suite. May 20 12:00:27.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:00:27.194: INFO: namespace: e2e-tests-configmap-jxbvt, resource: bindings, ignored listing per whitelist May 20 12:00:27.205: INFO: namespace e2e-tests-configmap-jxbvt deletion completed in 6.099930061s • [SLOW TEST:10.780 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:00:27.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-psnbk STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 12:00:27.284: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 20 12:00:53.466: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.153:8080/dial?request=hostName&protocol=http&host=10.244.2.178&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-psnbk PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 12:00:53.467: INFO: >>> kubeConfig: /root/.kube/config I0520 12:00:53.496257 7 log.go:172] (0xc0008dbad0) (0xc0019601e0) Create stream I0520 12:00:53.496290 7 log.go:172] (0xc0008dbad0) (0xc0019601e0) Stream added, broadcasting: 1 I0520 
12:00:53.498253 7 log.go:172] (0xc0008dbad0) Reply frame received for 1 I0520 12:00:53.498300 7 log.go:172] (0xc0008dbad0) (0xc0022f9220) Create stream I0520 12:00:53.498314 7 log.go:172] (0xc0008dbad0) (0xc0022f9220) Stream added, broadcasting: 3 I0520 12:00:53.499341 7 log.go:172] (0xc0008dbad0) Reply frame received for 3 I0520 12:00:53.499388 7 log.go:172] (0xc0008dbad0) (0xc0022f92c0) Create stream I0520 12:00:53.499400 7 log.go:172] (0xc0008dbad0) (0xc0022f92c0) Stream added, broadcasting: 5 I0520 12:00:53.500706 7 log.go:172] (0xc0008dbad0) Reply frame received for 5 I0520 12:00:53.601676 7 log.go:172] (0xc0008dbad0) Data frame received for 3 I0520 12:00:53.601705 7 log.go:172] (0xc0022f9220) (3) Data frame handling I0520 12:00:53.601719 7 log.go:172] (0xc0022f9220) (3) Data frame sent I0520 12:00:53.602628 7 log.go:172] (0xc0008dbad0) Data frame received for 5 I0520 12:00:53.602641 7 log.go:172] (0xc0022f92c0) (5) Data frame handling I0520 12:00:53.602770 7 log.go:172] (0xc0008dbad0) Data frame received for 3 I0520 12:00:53.602786 7 log.go:172] (0xc0022f9220) (3) Data frame handling I0520 12:00:53.604786 7 log.go:172] (0xc0008dbad0) Data frame received for 1 I0520 12:00:53.604800 7 log.go:172] (0xc0019601e0) (1) Data frame handling I0520 12:00:53.604806 7 log.go:172] (0xc0019601e0) (1) Data frame sent I0520 12:00:53.604816 7 log.go:172] (0xc0008dbad0) (0xc0019601e0) Stream removed, broadcasting: 1 I0520 12:00:53.604876 7 log.go:172] (0xc0008dbad0) Go away received I0520 12:00:53.604929 7 log.go:172] (0xc0008dbad0) (0xc0019601e0) Stream removed, broadcasting: 1 I0520 12:00:53.604973 7 log.go:172] (0xc0008dbad0) (0xc0022f9220) Stream removed, broadcasting: 3 I0520 12:00:53.604996 7 log.go:172] (0xc0008dbad0) (0xc0022f92c0) Stream removed, broadcasting: 5 May 20 12:00:53.605: INFO: Waiting for endpoints: map[] May 20 12:00:53.608: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.153:8080/dial?request=hostName&protocol=http&host=10.244.1.152&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-psnbk PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 12:00:53.609: INFO: >>> kubeConfig: /root/.kube/config I0520 12:00:53.639296 7 log.go:172] (0xc001ba42c0) (0xc002238a00) Create stream I0520 12:00:53.639324 7 log.go:172] (0xc001ba42c0) (0xc002238a00) Stream added, broadcasting: 1 I0520 12:00:53.641433 7 log.go:172] (0xc001ba42c0) Reply frame received for 1 I0520 12:00:53.641490 7 log.go:172] (0xc001ba42c0) (0xc0019603c0) Create stream I0520 12:00:53.641508 7 log.go:172] (0xc001ba42c0) (0xc0019603c0) Stream added, broadcasting: 3 I0520 12:00:53.642642 7 log.go:172] (0xc001ba42c0) Reply frame received for 3 I0520 12:00:53.642727 7 log.go:172] (0xc001ba42c0) (0xc002238b40) Create stream I0520 12:00:53.642749 7 log.go:172] (0xc001ba42c0) (0xc002238b40) Stream added, broadcasting: 5 I0520 12:00:53.643807 7 log.go:172] (0xc001ba42c0) Reply frame received for 5 I0520 12:00:53.721999 7 log.go:172] (0xc001ba42c0) Data frame received for 3 I0520 12:00:53.722051 7 log.go:172] (0xc0019603c0) (3) Data frame handling I0520 12:00:53.722069 7 log.go:172] (0xc0019603c0) (3) Data frame sent I0520 12:00:53.722750 7 log.go:172] (0xc001ba42c0) Data frame received for 5 I0520 12:00:53.722768 7 log.go:172] (0xc002238b40) (5) Data frame handling I0520 12:00:53.722948 7 log.go:172] (0xc001ba42c0) Data frame received for 3 I0520 12:00:53.722967 7 log.go:172] (0xc0019603c0) (3) Data frame handling I0520 
12:00:53.724327 7 log.go:172] (0xc001ba42c0) Data frame received for 1 I0520 12:00:53.724355 7 log.go:172] (0xc002238a00) (1) Data frame handling I0520 12:00:53.724391 7 log.go:172] (0xc002238a00) (1) Data frame sent I0520 12:00:53.724413 7 log.go:172] (0xc001ba42c0) (0xc002238a00) Stream removed, broadcasting: 1 I0520 12:00:53.724435 7 log.go:172] (0xc001ba42c0) Go away received I0520 12:00:53.724515 7 log.go:172] (0xc001ba42c0) (0xc002238a00) Stream removed, broadcasting: 1 I0520 12:00:53.724549 7 log.go:172] (0xc001ba42c0) (0xc0019603c0) Stream removed, broadcasting: 3 I0520 12:00:53.724564 7 log.go:172] (0xc001ba42c0) (0xc002238b40) Stream removed, broadcasting: 5 May 20 12:00:53.724: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:00:53.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-psnbk" for this suite. May 20 12:01:17.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:01:17.827: INFO: namespace: e2e-tests-pod-network-test-psnbk, resource: bindings, ignored listing per whitelist May 20 12:01:17.828: INFO: namespace e2e-tests-pod-network-test-psnbk deletion completed in 24.099690375s • [SLOW TEST:50.622 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:01:17.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 20 12:01:17.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6k8x5' May 20 12:01:18.097: INFO: stderr: "" May 20 12:01:18.097: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 20 12:01:23.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod 
--namespace=e2e-tests-kubectl-6k8x5 -o json' May 20 12:01:23.252: INFO: stderr: "" May 20 12:01:23.252: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-20T12:01:18Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-6k8x5\",\n \"resourceVersion\": \"11572701\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-6k8x5/pods/e2e-test-nginx-pod\",\n \"uid\": \"9cdb8965-9a91-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-pmnrz\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-pmnrz\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-pmnrz\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-20T12:01:18Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-20T12:01:20Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-20T12:01:20Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-20T12:01:18Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://98b15a80f43c547d3a0fe1f85d2024483e9ef84d41054b3b934151299ac90423\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-20T12:01:20Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.179\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-20T12:01:18Z\"\n }\n}\n" STEP: replace the image in the pod May 20 12:01:23.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-6k8x5' May 20 12:01:23.523: INFO: stderr: "" May 20 12:01:23.523: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 May 20 12:01:23.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6k8x5' May 20 12:01:31.727: INFO: stderr: "" May 20 12:01:31.727: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:01:31.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6k8x5" for this suite. May 20 12:01:37.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:01:37.807: INFO: namespace: e2e-tests-kubectl-6k8x5, resource: bindings, ignored listing per whitelist May 20 12:01:37.811: INFO: namespace e2e-tests-kubectl-6k8x5 deletion completed in 6.08038369s • [SLOW TEST:19.983 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:01:37.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition May 20 12:01:37.925: INFO: Waiting up to 5m0s for pod "var-expansion-a8af02a6-9a91-11ea-b520-0242ac110018" in namespace "e2e-tests-var-expansion-s68z7" to be "success or failure" May 20 12:01:37.930: INFO: Pod "var-expansion-a8af02a6-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.852638ms May 20 12:01:39.935: INFO: Pod "var-expansion-a8af02a6-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009956259s May 20 12:01:41.939: INFO: Pod "var-expansion-a8af02a6-9a91-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014082629s STEP: Saw pod success May 20 12:01:41.939: INFO: Pod "var-expansion-a8af02a6-9a91-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:01:41.942: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-a8af02a6-9a91-11ea-b520-0242ac110018 container dapi-container: STEP: delete the pod May 20 12:01:41.990: INFO: Waiting for pod var-expansion-a8af02a6-9a91-11ea-b520-0242ac110018 to disappear May 20 12:01:42.218: INFO: Pod var-expansion-a8af02a6-9a91-11ea-b520-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:01:42.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-s68z7" for this suite. May 20 12:01:48.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:01:48.292: INFO: namespace: e2e-tests-var-expansion-s68z7, resource: bindings, ignored listing per whitelist May 20 12:01:48.310: INFO: namespace e2e-tests-var-expansion-s68z7 deletion completed in 6.087720603s • [SLOW TEST:10.499 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:01:48.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all May 20 12:01:48.392: INFO: Waiting up to 5m0s for pod "client-containers-aeec3ede-9a91-11ea-b520-0242ac110018" in namespace "e2e-tests-containers-8mpb2" to be "success or failure" May 20 12:01:48.439: INFO: Pod "client-containers-aeec3ede-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 46.617644ms May 20 12:01:50.443: INFO: Pod "client-containers-aeec3ede-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050834199s May 20 12:01:52.447: INFO: Pod "client-containers-aeec3ede-9a91-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.054832239s May 20 12:01:54.451: INFO: Pod "client-containers-aeec3ede-9a91-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.058311595s STEP: Saw pod success May 20 12:01:54.451: INFO: Pod "client-containers-aeec3ede-9a91-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:01:54.454: INFO: Trying to get logs from node hunter-worker2 pod client-containers-aeec3ede-9a91-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 12:01:54.498: INFO: Waiting for pod client-containers-aeec3ede-9a91-11ea-b520-0242ac110018 to disappear May 20 12:01:54.511: INFO: Pod client-containers-aeec3ede-9a91-11ea-b520-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:01:54.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-8mpb2" for this suite. May 20 12:02:00.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:02:00.595: INFO: namespace: e2e-tests-containers-8mpb2, resource: bindings, ignored listing per whitelist May 20 12:02:00.604: INFO: namespace e2e-tests-containers-8mpb2 deletion completed in 6.088327566s • [SLOW TEST:12.294 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:02:00.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-7d7z9 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-7d7z9 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-7d7z9 May 20 12:02:00.721: INFO: Found 0 stateful pods, waiting for 1 May 20 12:02:10.726: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 20 12:02:10.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7d7z9 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 20 12:02:11.011: INFO: stderr: "I0520 12:02:10.855137 2187 log.go:172] (0xc0008882c0) (0xc000760640) 
Create stream\nI0520 12:02:10.855199 2187 log.go:172] (0xc0008882c0) (0xc000760640) Stream added, broadcasting: 1\nI0520 12:02:10.857584 2187 log.go:172] (0xc0008882c0) Reply frame received for 1\nI0520 12:02:10.857645 2187 log.go:172] (0xc0008882c0) (0xc000670e60) Create stream\nI0520 12:02:10.857669 2187 log.go:172] (0xc0008882c0) (0xc000670e60) Stream added, broadcasting: 3\nI0520 12:02:10.858535 2187 log.go:172] (0xc0008882c0) Reply frame received for 3\nI0520 12:02:10.858586 2187 log.go:172] (0xc0008882c0) (0xc0007606e0) Create stream\nI0520 12:02:10.858611 2187 log.go:172] (0xc0008882c0) (0xc0007606e0) Stream added, broadcasting: 5\nI0520 12:02:10.859598 2187 log.go:172] (0xc0008882c0) Reply frame received for 5\nI0520 12:02:11.003791 2187 log.go:172] (0xc0008882c0) Data frame received for 3\nI0520 12:02:11.003822 2187 log.go:172] (0xc000670e60) (3) Data frame handling\nI0520 12:02:11.003845 2187 log.go:172] (0xc000670e60) (3) Data frame sent\nI0520 12:02:11.003852 2187 log.go:172] (0xc0008882c0) Data frame received for 3\nI0520 12:02:11.003865 2187 log.go:172] (0xc0008882c0) Data frame received for 5\nI0520 12:02:11.003880 2187 log.go:172] (0xc0007606e0) (5) Data frame handling\nI0520 12:02:11.003901 2187 log.go:172] (0xc000670e60) (3) Data frame handling\nI0520 12:02:11.005703 2187 log.go:172] (0xc0008882c0) Data frame received for 1\nI0520 12:02:11.005718 2187 log.go:172] (0xc000760640) (1) Data frame handling\nI0520 12:02:11.005728 2187 log.go:172] (0xc000760640) (1) Data frame sent\nI0520 12:02:11.005738 2187 log.go:172] (0xc0008882c0) (0xc000760640) Stream removed, broadcasting: 1\nI0520 12:02:11.005746 2187 log.go:172] (0xc0008882c0) Go away received\nI0520 12:02:11.005874 2187 log.go:172] (0xc0008882c0) (0xc000760640) Stream removed, broadcasting: 1\nI0520 12:02:11.005887 2187 log.go:172] (0xc0008882c0) (0xc000670e60) Stream removed, broadcasting: 3\nI0520 12:02:11.005893 2187 log.go:172] (0xc0008882c0) (0xc0007606e0) Stream removed, broadcasting: 5\n" May 20 12:02:11.011: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 20 12:02:11.011: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 20 12:02:11.014: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 20 12:02:21.018: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 20 12:02:21.018: INFO: Waiting for statefulset status.replicas updated to 0 May 20 12:02:21.039: INFO: POD NODE PHASE GRACE CONDITIONS May 20 12:02:21.039: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:00 +0000 UTC }] May 20 12:02:21.039: INFO: May 20 12:02:21.039: INFO: StatefulSet ss has not reached scale 3, at 1 May 20 12:02:22.044: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987107254s May 20 12:02:23.048: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98220635s May 20 12:02:24.053: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.978063613s May 20 12:02:25.123: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 5.973314436s May 20 12:02:26.128: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.903385796s May 20 12:02:27.132: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.898271139s May 20 12:02:28.138: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.893889466s May 20 12:02:29.143: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.888201426s May 20 12:02:30.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 882.659197ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-7d7z9 May 20 12:02:31.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7d7z9 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:02:31.367: INFO: stderr: "I0520 12:02:31.276155 2209 log.go:172] (0xc000734370) (0xc000776640) Create stream\nI0520 12:02:31.276218 2209 log.go:172] (0xc000734370) (0xc000776640) Stream added, broadcasting: 1\nI0520 12:02:31.278245 2209 log.go:172] (0xc000734370) Reply frame received for 1\nI0520 12:02:31.278280 2209 log.go:172] (0xc000734370) (0xc0005f2c80) Create stream\nI0520 12:02:31.278290 2209 log.go:172] (0xc000734370) (0xc0005f2c80) Stream added, broadcasting: 3\nI0520 12:02:31.279308 2209 log.go:172] (0xc000734370) Reply frame received for 3\nI0520 12:02:31.279340 2209 log.go:172] (0xc000734370) (0xc0005f2dc0) Create stream\nI0520 12:02:31.279360 2209 log.go:172] (0xc000734370) (0xc0005f2dc0) Stream added, broadcasting: 5\nI0520 12:02:31.280186 2209 log.go:172] (0xc000734370) Reply frame received for 5\nI0520 12:02:31.361925 2209 log.go:172] (0xc000734370) Data frame received for 5\nI0520 12:02:31.361972 2209 log.go:172] (0xc0005f2dc0) (5) Data frame handling\nI0520 12:02:31.361998 2209 log.go:172] (0xc000734370) Data frame received for 3\nI0520 12:02:31.362009 2209 log.go:172] (0xc0005f2c80) (3) Data frame handling\nI0520 12:02:31.362021 2209 log.go:172] (0xc0005f2c80) (3) Data frame sent\nI0520 12:02:31.362037 2209 log.go:172] (0xc000734370) Data frame received for 3\nI0520 12:02:31.362060 2209 log.go:172] (0xc0005f2c80) (3) Data frame handling\nI0520 12:02:31.363330 2209 log.go:172] (0xc000734370) Data frame received for 1\nI0520 12:02:31.363357 2209 log.go:172] (0xc000776640) (1) Data frame handling\nI0520 12:02:31.363380 2209 log.go:172] (0xc000776640) (1) Data frame sent\nI0520 12:02:31.363408 2209 log.go:172] (0xc000734370) (0xc000776640) Stream removed, broadcasting: 1\nI0520 12:02:31.363438 2209 log.go:172] (0xc000734370) Go away received\nI0520 12:02:31.363596 2209 log.go:172] (0xc000734370) (0xc000776640) Stream removed, broadcasting: 1\nI0520 12:02:31.363618 2209 log.go:172] (0xc000734370) (0xc0005f2c80) Stream removed, broadcasting: 3\nI0520 12:02:31.363629 2209 log.go:172] (0xc000734370) (0xc0005f2dc0) Stream removed, broadcasting: 5\n" May 20 12:02:31.368: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 20 12:02:31.368: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 20 12:02:31.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7d7z9 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:02:31.582: INFO: stderr: "I0520 12:02:31.506032 2231 log.go:172] (0xc00078e2c0) (0xc0006f0640) 
Create stream\nI0520 12:02:31.506097 2231 log.go:172] (0xc00078e2c0) (0xc0006f0640) Stream added, broadcasting: 1\nI0520 12:02:31.508381 2231 log.go:172] (0xc00078e2c0) Reply frame received for 1\nI0520 12:02:31.508428 2231 log.go:172] (0xc00078e2c0) (0xc00041cd20) Create stream\nI0520 12:02:31.508463 2231 log.go:172] (0xc00078e2c0) (0xc00041cd20) Stream added, broadcasting: 3\nI0520 12:02:31.509958 2231 log.go:172] (0xc00078e2c0) Reply frame received for 3\nI0520 12:02:31.510012 2231 log.go:172] (0xc00078e2c0) (0xc00041ce60) Create stream\nI0520 12:02:31.510033 2231 log.go:172] (0xc00078e2c0) (0xc00041ce60) Stream added, broadcasting: 5\nI0520 12:02:31.510881 2231 log.go:172] (0xc00078e2c0) Reply frame received for 5\nI0520 12:02:31.575326 2231 log.go:172] (0xc00078e2c0) Data frame received for 3\nI0520 12:02:31.575348 2231 log.go:172] (0xc00041cd20) (3) Data frame handling\nI0520 12:02:31.575358 2231 log.go:172] (0xc00041cd20) (3) Data frame sent\nI0520 12:02:31.575373 2231 log.go:172] (0xc00078e2c0) Data frame received for 3\nI0520 12:02:31.575389 2231 log.go:172] (0xc00041cd20) (3) Data frame handling\nI0520 12:02:31.575715 2231 log.go:172] (0xc00078e2c0) Data frame received for 5\nI0520 12:02:31.575749 2231 log.go:172] (0xc00041ce60) (5) Data frame handling\nI0520 12:02:31.575782 2231 log.go:172] (0xc00041ce60) (5) Data frame sent\nI0520 12:02:31.575796 2231 log.go:172] (0xc00078e2c0) Data frame received for 5\nI0520 12:02:31.575808 2231 log.go:172] (0xc00041ce60) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0520 12:02:31.577467 2231 log.go:172] (0xc00078e2c0) Data frame received for 1\nI0520 12:02:31.577505 2231 log.go:172] (0xc0006f0640) (1) Data frame handling\nI0520 12:02:31.577527 2231 log.go:172] (0xc0006f0640) (1) Data frame sent\nI0520 12:02:31.577550 2231 log.go:172] (0xc00078e2c0) (0xc0006f0640) Stream removed, broadcasting: 1\nI0520 12:02:31.577595 2231 log.go:172] (0xc00078e2c0) Go away received\nI0520 12:02:31.577869 2231 log.go:172] (0xc00078e2c0) (0xc0006f0640) Stream removed, broadcasting: 1\nI0520 12:02:31.577915 2231 log.go:172] (0xc00078e2c0) (0xc00041cd20) Stream removed, broadcasting: 3\nI0520 12:02:31.577942 2231 log.go:172] (0xc00078e2c0) (0xc00041ce60) Stream removed, broadcasting: 5\n" May 20 12:02:31.582: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 20 12:02:31.582: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 20 12:02:31.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7d7z9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:02:31.794: INFO: stderr: "I0520 12:02:31.710770 2254 log.go:172] (0xc00015c6e0) (0xc0007fc640) Create stream\nI0520 12:02:31.710839 2254 log.go:172] (0xc00015c6e0) (0xc0007fc640) Stream added, broadcasting: 1\nI0520 12:02:31.713435 2254 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0520 12:02:31.713488 2254 log.go:172] (0xc00015c6e0) (0xc000610dc0) Create stream\nI0520 12:02:31.713501 2254 log.go:172] (0xc00015c6e0) (0xc000610dc0) Stream added, broadcasting: 3\nI0520 12:02:31.714347 2254 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0520 12:02:31.714385 2254 log.go:172] (0xc00015c6e0) (0xc000594000) Create stream\nI0520 12:02:31.714399 2254 log.go:172] (0xc00015c6e0) (0xc000594000) Stream added, broadcasting: 5\nI0520 12:02:31.715174 2254 log.go:172] 
(0xc00015c6e0) Reply frame received for 5\nI0520 12:02:31.786298 2254 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0520 12:02:31.786333 2254 log.go:172] (0xc000610dc0) (3) Data frame handling\nI0520 12:02:31.786350 2254 log.go:172] (0xc000610dc0) (3) Data frame sent\nI0520 12:02:31.786361 2254 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0520 12:02:31.786371 2254 log.go:172] (0xc000610dc0) (3) Data frame handling\nI0520 12:02:31.786478 2254 log.go:172] (0xc00015c6e0) Data frame received for 5\nI0520 12:02:31.786498 2254 log.go:172] (0xc000594000) (5) Data frame handling\nI0520 12:02:31.786511 2254 log.go:172] (0xc000594000) (5) Data frame sent\nI0520 12:02:31.786522 2254 log.go:172] (0xc00015c6e0) Data frame received for 5\nI0520 12:02:31.786536 2254 log.go:172] (0xc000594000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0520 12:02:31.788347 2254 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0520 12:02:31.788372 2254 log.go:172] (0xc0007fc640) (1) Data frame handling\nI0520 12:02:31.788385 2254 log.go:172] (0xc0007fc640) (1) Data frame sent\nI0520 12:02:31.788403 2254 log.go:172] (0xc00015c6e0) (0xc0007fc640) Stream removed, broadcasting: 1\nI0520 12:02:31.788421 2254 log.go:172] (0xc00015c6e0) Go away received\nI0520 12:02:31.788794 2254 log.go:172] (0xc00015c6e0) (0xc0007fc640) Stream removed, broadcasting: 1\nI0520 12:02:31.788829 2254 log.go:172] (0xc00015c6e0) (0xc000610dc0) Stream removed, broadcasting: 3\nI0520 12:02:31.788847 2254 log.go:172] (0xc00015c6e0) (0xc000594000) Stream removed, broadcasting: 5\n" May 20 12:02:31.794: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 20 12:02:31.794: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 20 12:02:31.817: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 20 12:02:41.822: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 20 12:02:41.822: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 20 12:02:41.822: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 20 12:02:41.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7d7z9 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 20 12:02:42.052: INFO: stderr: "I0520 12:02:41.947935 2278 log.go:172] (0xc00016a840) (0xc000671360) Create stream\nI0520 12:02:41.947992 2278 log.go:172] (0xc00016a840) (0xc000671360) Stream added, broadcasting: 1\nI0520 12:02:41.950482 2278 log.go:172] (0xc00016a840) Reply frame received for 1\nI0520 12:02:41.950518 2278 log.go:172] (0xc00016a840) (0xc000748000) Create stream\nI0520 12:02:41.950534 2278 log.go:172] (0xc00016a840) (0xc000748000) Stream added, broadcasting: 3\nI0520 12:02:41.951412 2278 log.go:172] (0xc00016a840) Reply frame received for 3\nI0520 12:02:41.951484 2278 log.go:172] (0xc00016a840) (0xc000748140) Create stream\nI0520 12:02:41.951510 2278 log.go:172] (0xc00016a840) (0xc000748140) Stream added, broadcasting: 5\nI0520 12:02:41.952539 2278 log.go:172] (0xc00016a840) Reply frame received for 5\nI0520 12:02:42.044481 2278 log.go:172] (0xc00016a840) Data frame received for 5\nI0520 12:02:42.044534 2278 log.go:172] (0xc000748140) (5) Data 
frame handling\nI0520 12:02:42.044568 2278 log.go:172] (0xc00016a840) Data frame received for 3\nI0520 12:02:42.044584 2278 log.go:172] (0xc000748000) (3) Data frame handling\nI0520 12:02:42.044601 2278 log.go:172] (0xc000748000) (3) Data frame sent\nI0520 12:02:42.044616 2278 log.go:172] (0xc00016a840) Data frame received for 3\nI0520 12:02:42.044630 2278 log.go:172] (0xc000748000) (3) Data frame handling\nI0520 12:02:42.046328 2278 log.go:172] (0xc00016a840) Data frame received for 1\nI0520 12:02:42.046367 2278 log.go:172] (0xc000671360) (1) Data frame handling\nI0520 12:02:42.046390 2278 log.go:172] (0xc000671360) (1) Data frame sent\nI0520 12:02:42.046404 2278 log.go:172] (0xc00016a840) (0xc000671360) Stream removed, broadcasting: 1\nI0520 12:02:42.046445 2278 log.go:172] (0xc00016a840) Go away received\nI0520 12:02:42.046667 2278 log.go:172] (0xc00016a840) (0xc000671360) Stream removed, broadcasting: 1\nI0520 12:02:42.046700 2278 log.go:172] (0xc00016a840) (0xc000748000) Stream removed, broadcasting: 3\nI0520 12:02:42.046732 2278 log.go:172] (0xc00016a840) (0xc000748140) Stream removed, broadcasting: 5\n" May 20 12:02:42.052: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 20 12:02:42.052: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 20 12:02:42.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7d7z9 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 20 12:02:42.283: INFO: stderr: "I0520 12:02:42.177808 2299 log.go:172] (0xc00080ebb0) (0xc0006a1a40) Create stream\nI0520 12:02:42.177863 2299 log.go:172] (0xc00080ebb0) (0xc0006a1a40) Stream added, broadcasting: 1\nI0520 12:02:42.181479 2299 log.go:172] (0xc00080ebb0) Reply frame received for 1\nI0520 12:02:42.181529 2299 log.go:172] (0xc00080ebb0) (0xc0006a0dc0) Create stream\nI0520 12:02:42.181542 2299 log.go:172] (0xc00080ebb0) (0xc0006a0dc0) Stream added, broadcasting: 3\nI0520 12:02:42.182499 2299 log.go:172] (0xc00080ebb0) Reply frame received for 3\nI0520 12:02:42.182548 2299 log.go:172] (0xc00080ebb0) (0xc0006a0f00) Create stream\nI0520 12:02:42.182568 2299 log.go:172] (0xc00080ebb0) (0xc0006a0f00) Stream added, broadcasting: 5\nI0520 12:02:42.183491 2299 log.go:172] (0xc00080ebb0) Reply frame received for 5\nI0520 12:02:42.278883 2299 log.go:172] (0xc00080ebb0) Data frame received for 3\nI0520 12:02:42.278930 2299 log.go:172] (0xc0006a0dc0) (3) Data frame handling\nI0520 12:02:42.278945 2299 log.go:172] (0xc0006a0dc0) (3) Data frame sent\nI0520 12:02:42.278956 2299 log.go:172] (0xc00080ebb0) Data frame received for 3\nI0520 12:02:42.278967 2299 log.go:172] (0xc0006a0dc0) (3) Data frame handling\nI0520 12:02:42.279016 2299 log.go:172] (0xc00080ebb0) Data frame received for 5\nI0520 12:02:42.279042 2299 log.go:172] (0xc0006a0f00) (5) Data frame handling\nI0520 12:02:42.280334 2299 log.go:172] (0xc00080ebb0) Data frame received for 1\nI0520 12:02:42.280346 2299 log.go:172] (0xc0006a1a40) (1) Data frame handling\nI0520 12:02:42.280352 2299 log.go:172] (0xc0006a1a40) (1) Data frame sent\nI0520 12:02:42.280363 2299 log.go:172] (0xc00080ebb0) (0xc0006a1a40) Stream removed, broadcasting: 1\nI0520 12:02:42.280397 2299 log.go:172] (0xc00080ebb0) Go away received\nI0520 12:02:42.280554 2299 log.go:172] (0xc00080ebb0) (0xc0006a1a40) Stream removed, broadcasting: 1\nI0520 12:02:42.280572 2299 log.go:172] (0xc00080ebb0) 
(0xc0006a0dc0) Stream removed, broadcasting: 3\nI0520 12:02:42.280580 2299 log.go:172] (0xc00080ebb0) (0xc0006a0f00) Stream removed, broadcasting: 5\n" May 20 12:02:42.283: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 20 12:02:42.283: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 20 12:02:42.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7d7z9 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 20 12:02:42.517: INFO: stderr: "I0520 12:02:42.408581 2321 log.go:172] (0xc000162840) (0xc00076a640) Create stream\nI0520 12:02:42.408638 2321 log.go:172] (0xc000162840) (0xc00076a640) Stream added, broadcasting: 1\nI0520 12:02:42.411308 2321 log.go:172] (0xc000162840) Reply frame received for 1\nI0520 12:02:42.411366 2321 log.go:172] (0xc000162840) (0xc0001e4d20) Create stream\nI0520 12:02:42.411385 2321 log.go:172] (0xc000162840) (0xc0001e4d20) Stream added, broadcasting: 3\nI0520 12:02:42.412480 2321 log.go:172] (0xc000162840) Reply frame received for 3\nI0520 12:02:42.412547 2321 log.go:172] (0xc000162840) (0xc00040e000) Create stream\nI0520 12:02:42.412574 2321 log.go:172] (0xc000162840) (0xc00040e000) Stream added, broadcasting: 5\nI0520 12:02:42.413870 2321 log.go:172] (0xc000162840) Reply frame received for 5\nI0520 12:02:42.512458 2321 log.go:172] (0xc000162840) Data frame received for 5\nI0520 12:02:42.512500 2321 log.go:172] (0xc00040e000) (5) Data frame handling\nI0520 12:02:42.512540 2321 log.go:172] (0xc000162840) Data frame received for 3\nI0520 12:02:42.512566 2321 log.go:172] (0xc0001e4d20) (3) Data frame handling\nI0520 12:02:42.512582 2321 log.go:172] (0xc0001e4d20) (3) Data frame sent\nI0520 12:02:42.512714 2321 log.go:172] (0xc000162840) Data frame received for 3\nI0520 12:02:42.512746 2321 log.go:172] (0xc0001e4d20) (3) Data frame handling\nI0520 12:02:42.514760 2321 log.go:172] (0xc000162840) Data frame received for 1\nI0520 12:02:42.514783 2321 log.go:172] (0xc00076a640) (1) Data frame handling\nI0520 12:02:42.514791 2321 log.go:172] (0xc00076a640) (1) Data frame sent\nI0520 12:02:42.514800 2321 log.go:172] (0xc000162840) (0xc00076a640) Stream removed, broadcasting: 1\nI0520 12:02:42.514838 2321 log.go:172] (0xc000162840) Go away received\nI0520 12:02:42.514938 2321 log.go:172] (0xc000162840) (0xc00076a640) Stream removed, broadcasting: 1\nI0520 12:02:42.514951 2321 log.go:172] (0xc000162840) (0xc0001e4d20) Stream removed, broadcasting: 3\nI0520 12:02:42.514957 2321 log.go:172] (0xc000162840) (0xc00040e000) Stream removed, broadcasting: 5\n" May 20 12:02:42.517: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 20 12:02:42.517: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 20 12:02:42.518: INFO: Waiting for statefulset status.replicas updated to 0 May 20 12:02:42.522: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 20 12:02:52.530: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 20 12:02:52.530: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 20 12:02:52.531: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 20 12:02:52.555: INFO: POD NODE PHASE GRACE CONDITIONS May 20 
12:02:52.555: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:00 +0000 UTC }] May 20 12:02:52.555: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC }] May 20 12:02:52.555: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC }] May 20 12:02:52.555: INFO: May 20 12:02:52.555: INFO: StatefulSet ss has not reached scale 0, at 3 May 20 12:02:53.692: INFO: POD NODE PHASE GRACE CONDITIONS May 20 12:02:53.692: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:00 +0000 UTC }] May 20 12:02:53.692: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC }] May 20 12:02:53.693: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC }] May 20 12:02:53.693: INFO: May 20 12:02:53.693: INFO: StatefulSet ss has not reached scale 0, at 3 May 20 12:02:54.698: INFO: POD NODE PHASE GRACE CONDITIONS May 20 12:02:54.698: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:00 +0000 UTC }] May 20 12:02:54.698: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC }] May 20 12:02:54.698: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC }] May 20 12:02:54.698: INFO: May 20 12:02:54.698: INFO: StatefulSet ss has not reached scale 0, at 3 May 20 12:02:55.704: INFO: POD NODE PHASE GRACE CONDITIONS May 20 12:02:55.704: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:00 +0000 UTC }] May 20 12:02:55.704: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC }] May 20 12:02:55.704: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC }] May 20 12:02:55.704: INFO: May 20 12:02:55.704: INFO: StatefulSet ss has not reached scale 0, at 3 May 20 12:02:56.709: INFO: POD NODE PHASE GRACE CONDITIONS May 20 12:02:56.709: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC }] May 20 12:02:56.709: INFO: 
May 20 12:02:56.709: INFO: StatefulSet ss has not reached scale 0, at 1 May 20 12:02:57.714: INFO: POD NODE PHASE GRACE CONDITIONS May 20 12:02:57.714: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC }] May 20 12:02:57.714: INFO: May 20 12:02:57.714: INFO: StatefulSet ss has not reached scale 0, at 1 May 20 12:02:58.718: INFO: POD NODE PHASE GRACE CONDITIONS May 20 12:02:58.718: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC }] May 20 12:02:58.718: INFO: May 20 12:02:58.718: INFO: StatefulSet ss has not reached scale 0, at 1 May 20 12:02:59.723: INFO: POD NODE PHASE GRACE CONDITIONS May 20 12:02:59.723: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC }] May 20 12:02:59.723: INFO: May 20 12:02:59.723: INFO: StatefulSet ss has not reached scale 0, at 1 May 20 12:03:00.728: INFO: POD NODE PHASE GRACE CONDITIONS May 20 12:03:00.728: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:02:21 +0000 UTC }] May 20 12:03:00.728: INFO: May 20 12:03:00.728: INFO: StatefulSet ss has not reached scale 0, at 1 May 20 12:03:01.749: INFO: Verifying statefulset ss doesn't scale past 0 for another 808.628529ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-7d7z9 May 20 12:03:02.755: INFO: Scaling statefulset ss to 0 May 20 12:03:02.763: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 20 12:03:02.764: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7d7z9 May 20 12:03:02.766: INFO: Scaling statefulset ss to 0 May 20 12:03:02.772: INFO: Waiting for statefulset status.replicas updated to 0 May 20 12:03:02.774: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:03:02.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-7d7z9" for this suite. May 20 12:03:08.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:03:08.841: INFO: namespace: e2e-tests-statefulset-7d7z9, resource: bindings, ignored listing per whitelist May 20 12:03:08.904: INFO: namespace e2e-tests-statefulset-7d7z9 deletion completed in 6.116006866s • [SLOW TEST:68.300 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:03:08.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 20 12:03:08.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-fgj4m' May 20 12:03:09.099: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 20 12:03:09.099: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 20 12:03:09.103: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 20 12:03:09.111: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 20 12:03:09.139: INFO: scanned /root for discovery docs: May 20 12:03:09.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-fgj4m' May 20 12:03:26.041: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 20 12:03:26.041: INFO: stdout: "Created e2e-test-nginx-rc-97803108bd5a9441073836e9e7e0e241\nScaling up e2e-test-nginx-rc-97803108bd5a9441073836e9e7e0e241 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-97803108bd5a9441073836e9e7e0e241 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-97803108bd5a9441073836e9e7e0e241 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" May 20 12:03:26.041: INFO: stdout: "Created e2e-test-nginx-rc-97803108bd5a9441073836e9e7e0e241\nScaling up e2e-test-nginx-rc-97803108bd5a9441073836e9e7e0e241 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-97803108bd5a9441073836e9e7e0e241 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-97803108bd5a9441073836e9e7e0e241 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 20 12:03:26.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fgj4m' May 20 12:03:26.141: INFO: stderr: "" May 20 12:03:26.141: INFO: stdout: "e2e-test-nginx-rc-97803108bd5a9441073836e9e7e0e241-trqgn " May 20 12:03:26.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-97803108bd5a9441073836e9e7e0e241-trqgn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fgj4m' May 20 12:03:26.260: INFO: stderr: "" May 20 12:03:26.260: INFO: stdout: "true" May 20 12:03:26.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-97803108bd5a9441073836e9e7e0e241-trqgn -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fgj4m' May 20 12:03:26.366: INFO: stderr: "" May 20 12:03:26.366: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 20 12:03:26.366: INFO: e2e-test-nginx-rc-97803108bd5a9441073836e9e7e0e241-trqgn is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 May 20 12:03:26.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fgj4m' May 20 12:03:26.475: INFO: stderr: "" May 20 12:03:26.475: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:03:26.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fgj4m" for this suite. May 20 12:03:32.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:03:32.566: INFO: namespace: e2e-tests-kubectl-fgj4m, resource: bindings, ignored listing per whitelist May 20 12:03:32.598: INFO: namespace e2e-tests-kubectl-fgj4m deletion completed in 6.120154642s • [SLOW TEST:23.694 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:03:32.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments May 20 12:03:32.742: INFO: Waiting up to 5m0s for pod "client-containers-ed19e74f-9a91-11ea-b520-0242ac110018" in namespace "e2e-tests-containers-6cf9b" to be "success or failure" May 20 12:03:32.747: INFO: Pod "client-containers-ed19e74f-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.39676ms May 20 12:03:34.751: INFO: Pod "client-containers-ed19e74f-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008609285s May 20 12:03:36.755: INFO: Pod "client-containers-ed19e74f-9a91-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01305771s STEP: Saw pod success May 20 12:03:36.756: INFO: Pod "client-containers-ed19e74f-9a91-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:03:36.758: INFO: Trying to get logs from node hunter-worker2 pod client-containers-ed19e74f-9a91-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 12:03:36.778: INFO: Waiting for pod client-containers-ed19e74f-9a91-11ea-b520-0242ac110018 to disappear May 20 12:03:36.782: INFO: Pod client-containers-ed19e74f-9a91-11ea-b520-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:03:36.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-6cf9b" for this suite. May 20 12:03:42.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:03:42.822: INFO: namespace: e2e-tests-containers-6cf9b, resource: bindings, ignored listing per whitelist May 20 12:03:42.863: INFO: namespace e2e-tests-containers-6cf9b deletion completed in 6.077091218s • [SLOW TEST:10.265 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:03:42.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults May 20 12:03:42.987: INFO: Waiting up to 5m0s for pod "client-containers-f334e977-9a91-11ea-b520-0242ac110018" in namespace "e2e-tests-containers-9779b" to be "success or failure" May 20 12:03:42.993: INFO: Pod "client-containers-f334e977-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.475109ms May 20 12:03:44.998: INFO: Pod "client-containers-f334e977-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010103473s May 20 12:03:47.001: INFO: Pod "client-containers-f334e977-9a91-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013723329s STEP: Saw pod success May 20 12:03:47.001: INFO: Pod "client-containers-f334e977-9a91-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:03:47.004: INFO: Trying to get logs from node hunter-worker pod client-containers-f334e977-9a91-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 12:03:47.025: INFO: Waiting for pod client-containers-f334e977-9a91-11ea-b520-0242ac110018 to disappear May 20 12:03:47.042: INFO: Pod client-containers-f334e977-9a91-11ea-b520-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:03:47.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-9779b" for this suite. May 20 12:03:53.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:03:53.112: INFO: namespace: e2e-tests-containers-9779b, resource: bindings, ignored listing per whitelist May 20 12:03:53.147: INFO: namespace e2e-tests-containers-9779b deletion completed in 6.101867353s • [SLOW TEST:10.284 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:03:53.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-f95ae20c-9a91-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 12:03:53.277: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f95bcd7f-9a91-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-mz88n" to be "success or failure" May 20 12:03:53.292: INFO: Pod "pod-projected-secrets-f95bcd7f-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.782839ms May 20 12:03:55.296: INFO: Pod "pod-projected-secrets-f95bcd7f-9a91-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019228107s May 20 12:03:57.300: INFO: Pod "pod-projected-secrets-f95bcd7f-9a91-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023065062s STEP: Saw pod success May 20 12:03:57.300: INFO: Pod "pod-projected-secrets-f95bcd7f-9a91-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:03:57.302: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-f95bcd7f-9a91-11ea-b520-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 20 12:03:57.405: INFO: Waiting for pod pod-projected-secrets-f95bcd7f-9a91-11ea-b520-0242ac110018 to disappear May 20 12:03:57.644: INFO: Pod pod-projected-secrets-f95bcd7f-9a91-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:03:57.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mz88n" for this suite. May 20 12:04:03.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:04:03.744: INFO: namespace: e2e-tests-projected-mz88n, resource: bindings, ignored listing per whitelist May 20 12:04:03.843: INFO: namespace e2e-tests-projected-mz88n deletion completed in 6.193174133s • [SLOW TEST:10.695 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:04:03.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc May 20 12:04:03.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vjmr5' May 20 12:04:04.215: INFO: stderr: "" May 20 12:04:04.215: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. May 20 12:04:05.219: INFO: Selector matched 1 pods for map[app:redis] May 20 12:04:05.219: INFO: Found 0 / 1 May 20 12:04:06.220: INFO: Selector matched 1 pods for map[app:redis] May 20 12:04:06.220: INFO: Found 0 / 1 May 20 12:04:07.220: INFO: Selector matched 1 pods for map[app:redis] May 20 12:04:07.220: INFO: Found 0 / 1 May 20 12:04:08.219: INFO: Selector matched 1 pods for map[app:redis] May 20 12:04:08.219: INFO: Found 1 / 1 May 20 12:04:08.220: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 May 20 12:04:08.223: INFO: Selector matched 1 pods for map[app:redis] May 20 12:04:08.223: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 20 12:04:08.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dl79m redis-master --namespace=e2e-tests-kubectl-vjmr5' May 20 12:04:08.342: INFO: stderr: "" May 20 12:04:08.342: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 May 12:04:07.334 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 May 12:04:07.335 # Server started, Redis version 3.2.12\n1:M 20 May 12:04:07.335 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 May 12:04:07.335 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 20 12:04:08.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dl79m redis-master --namespace=e2e-tests-kubectl-vjmr5 --tail=1' May 20 12:04:08.465: INFO: stderr: "" May 20 12:04:08.465: INFO: stdout: "1:M 20 May 12:04:07.335 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 20 12:04:08.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dl79m redis-master --namespace=e2e-tests-kubectl-vjmr5 --limit-bytes=1' May 20 12:04:08.565: INFO: stderr: "" May 20 12:04:08.565: INFO: stdout: " " STEP: exposing timestamps May 20 12:04:08.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dl79m redis-master --namespace=e2e-tests-kubectl-vjmr5 --tail=1 --timestamps' May 20 12:04:08.690: INFO: stderr: "" May 20 12:04:08.690: INFO: stdout: "2020-05-20T12:04:07.335453056Z 1:M 20 May 12:04:07.335 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 20 12:04:11.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dl79m redis-master --namespace=e2e-tests-kubectl-vjmr5 --since=1s' May 20 12:04:11.301: INFO: stderr: "" May 20 12:04:11.301: INFO: stdout: "" May 20 12:04:11.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dl79m redis-master --namespace=e2e-tests-kubectl-vjmr5 --since=24h' May 20 12:04:11.414: INFO: stderr: "" May 20 12:04:11.414: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 May 12:04:07.334 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 May 12:04:07.335 # Server started, Redis version 3.2.12\n1:M 20 May 12:04:07.335 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 May 12:04:07.335 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources May 20 12:04:11.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vjmr5' May 20 12:04:11.524: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 12:04:11.524: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 20 12:04:11.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-vjmr5' May 20 12:04:11.626: INFO: stderr: "No resources found.\n" May 20 12:04:11.626: INFO: stdout: "" May 20 12:04:11.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-vjmr5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 20 12:04:11.731: INFO: stderr: "" May 20 12:04:11.731: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:04:11.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vjmr5" for this suite. 
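For reference, the kubectl invocations above exercise the standard log-filtering flags (the test calls the deprecated alias "kubectl log"; "kubectl logs" is the current form). A minimal sketch of the equivalent commands, reusing the pod, container and namespace names from this run; any running pod would do, and flag behaviour is as documented for kubectl logs:

POD=redis-master-dl79m; CONTAINER=redis-master; NS=e2e-tests-kubectl-vjmr5
kubectl logs "$POD" -c "$CONTAINER" -n "$NS"                        # full container log
kubectl logs "$POD" -c "$CONTAINER" -n "$NS" --tail=1               # last line only
kubectl logs "$POD" -c "$CONTAINER" -n "$NS" --limit-bytes=1        # first byte only
kubectl logs "$POD" -c "$CONTAINER" -n "$NS" --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl logs "$POD" -c "$CONTAINER" -n "$NS" --since=1s             # only entries from the last second
kubectl logs "$POD" -c "$CONTAINER" -n "$NS" --since=24h            # entries from the last 24 hours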
May 20 12:04:17.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:04:18.011: INFO: namespace: e2e-tests-kubectl-vjmr5, resource: bindings, ignored listing per whitelist May 20 12:04:18.078: INFO: namespace e2e-tests-kubectl-vjmr5 deletion completed in 6.342992678s • [SLOW TEST:14.235 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:04:18.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 20 12:04:18.206: INFO: Waiting up to 5m0s for pod "downward-api-08361f26-9a92-11ea-b520-0242ac110018" in namespace "e2e-tests-downward-api-4tc2r" to be "success or failure" May 20 12:04:18.255: INFO: Pod "downward-api-08361f26-9a92-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 48.732064ms May 20 12:04:20.259: INFO: Pod "downward-api-08361f26-9a92-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052958186s May 20 12:04:22.264: INFO: Pod "downward-api-08361f26-9a92-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057350605s STEP: Saw pod success May 20 12:04:22.264: INFO: Pod "downward-api-08361f26-9a92-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:04:22.267: INFO: Trying to get logs from node hunter-worker2 pod downward-api-08361f26-9a92-11ea-b520-0242ac110018 container dapi-container: STEP: delete the pod May 20 12:04:22.289: INFO: Waiting for pod downward-api-08361f26-9a92-11ea-b520-0242ac110018 to disappear May 20 12:04:22.294: INFO: Pod downward-api-08361f26-9a92-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:04:22.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4tc2r" for this suite. 
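The downward-api pod above surfaces pod metadata as environment variables. A minimal sketch of a pod doing the same for its own UID; the pod name, image and command here are illustrative and not taken from the test source:

# Expose the pod's own UID to its container through the downward API.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF

The same fieldRef mechanism also accepts fields such as metadata.name, metadata.namespace and status.podIP.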
May 20 12:04:28.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:04:28.539: INFO: namespace: e2e-tests-downward-api-4tc2r, resource: bindings, ignored listing per whitelist May 20 12:04:28.611: INFO: namespace e2e-tests-downward-api-4tc2r deletion completed in 6.314641838s • [SLOW TEST:10.533 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:04:28.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-0e7ee204-9a92-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 12:04:28.752: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e7f92c0-9a92-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-mt7ss" to be "success or failure" May 20 12:04:28.763: INFO: Pod "pod-projected-configmaps-0e7f92c0-9a92-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.409661ms May 20 12:04:30.767: INFO: Pod "pod-projected-configmaps-0e7f92c0-9a92-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014763388s May 20 12:04:32.771: INFO: Pod "pod-projected-configmaps-0e7f92c0-9a92-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0189242s STEP: Saw pod success May 20 12:04:32.771: INFO: Pod "pod-projected-configmaps-0e7f92c0-9a92-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:04:32.775: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-0e7f92c0-9a92-11ea-b520-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 20 12:04:32.792: INFO: Waiting for pod pod-projected-configmaps-0e7f92c0-9a92-11ea-b520-0242ac110018 to disappear May 20 12:04:32.796: INFO: Pod pod-projected-configmaps-0e7f92c0-9a92-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:04:32.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mt7ss" for this suite. 
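The projected-configMap test above consumes a ConfigMap through a projected volume with a key-to-path mapping. A minimal sketch of that pattern; the object names, the key and the remapped path are illustrative:

# Project key "data-1" of a ConfigMap into the pod under a remapped file name.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-cm-demo
spec:
  restartPolicy: Never
  volumes:
  - name: projected-cm-volume
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: path/to/data-2
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-cm-volume
      mountPath: /etc/projected-configmap-volume
EOF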
May 20 12:04:38.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:04:38.941: INFO: namespace: e2e-tests-projected-mt7ss, resource: bindings, ignored listing per whitelist May 20 12:04:38.957: INFO: namespace e2e-tests-projected-mt7ss deletion completed in 6.137231798s • [SLOW TEST:10.346 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:04:38.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 20 12:04:39.121: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:39.149: INFO: Number of nodes with available pods: 0 May 20 12:04:39.149: INFO: Node hunter-worker is running more than one daemon pod May 20 12:04:40.154: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:40.157: INFO: Number of nodes with available pods: 0 May 20 12:04:40.157: INFO: Node hunter-worker is running more than one daemon pod May 20 12:04:41.154: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:41.156: INFO: Number of nodes with available pods: 0 May 20 12:04:41.156: INFO: Node hunter-worker is running more than one daemon pod May 20 12:04:42.157: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:42.160: INFO: Number of nodes with available pods: 0 May 20 12:04:42.160: INFO: Node hunter-worker is running more than one daemon pod May 20 12:04:43.154: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:43.160: INFO: Number of nodes with available pods: 1 May 20 12:04:43.160: INFO: Node hunter-worker2 is running more than one daemon pod May 20 12:04:44.152: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:44.155: INFO: Number of nodes with available pods: 2 May 20 12:04:44.155: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 20 12:04:44.221: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:44.229: INFO: Number of nodes with available pods: 1 May 20 12:04:44.229: INFO: Node hunter-worker2 is running more than one daemon pod May 20 12:04:45.234: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:45.237: INFO: Number of nodes with available pods: 1 May 20 12:04:45.237: INFO: Node hunter-worker2 is running more than one daemon pod May 20 12:04:46.302: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:46.306: INFO: Number of nodes with available pods: 1 May 20 12:04:46.306: INFO: Node hunter-worker2 is running more than one daemon pod May 20 12:04:47.251: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:47.254: INFO: Number of nodes with available pods: 1 May 20 12:04:47.254: INFO: Node hunter-worker2 is running more than one daemon pod May 20 12:04:48.256: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:48.305: INFO: Number of nodes with available pods: 1 May 20 12:04:48.305: INFO: Node hunter-worker2 is running more than one daemon pod May 20 12:04:49.234: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:49.236: INFO: Number of nodes with available pods: 1 May 20 12:04:49.236: INFO: Node hunter-worker2 is running more than one daemon pod May 20 12:04:50.575: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:50.819: INFO: Number of nodes with available pods: 1 May 20 12:04:50.819: INFO: Node hunter-worker2 is running more than one daemon pod May 20 12:04:51.317: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:51.320: INFO: Number of nodes with available pods: 1 May 20 12:04:51.320: INFO: Node hunter-worker2 is running more than one daemon pod May 20 12:04:52.238: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 12:04:52.242: INFO: Number of nodes with available pods: 2 May 20 12:04:52.242: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: 
Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-mfrnl, will wait for the garbage collector to delete the pods May 20 12:04:52.427: INFO: Deleting DaemonSet.extensions daemon-set took: 110.056758ms May 20 12:04:52.527: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.257825ms May 20 12:05:01.831: INFO: Number of nodes with available pods: 0 May 20 12:05:01.831: INFO: Number of running nodes: 0, number of available pods: 0 May 20 12:05:01.834: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mfrnl/daemonsets","resourceVersion":"11573651"},"items":null} May 20 12:05:01.836: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mfrnl/pods","resourceVersion":"11573651"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:05:01.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-mfrnl" for this suite. May 20 12:05:07.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:05:07.948: INFO: namespace: e2e-tests-daemonsets-mfrnl, resource: bindings, ignored listing per whitelist May 20 12:05:07.963: INFO: namespace e2e-tests-daemonsets-mfrnl deletion completed in 6.111537216s • [SLOW TEST:29.006 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:05:07.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-59wb STEP: Creating a pod to test atomic-volume-subpath May 20 12:05:08.140: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-59wb" in namespace "e2e-tests-subpath-lcp27" to be "success or failure" May 20 12:05:08.158: INFO: Pod "pod-subpath-test-projected-59wb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.972632ms May 20 12:05:10.162: INFO: Pod "pod-subpath-test-projected-59wb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021650203s May 20 12:05:12.166: INFO: Pod "pod-subpath-test-projected-59wb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025959238s May 20 12:05:14.170: INFO: Pod "pod-subpath-test-projected-59wb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.029129465s May 20 12:05:16.174: INFO: Pod "pod-subpath-test-projected-59wb": Phase="Running", Reason="", readiness=false. Elapsed: 8.033318722s May 20 12:05:18.179: INFO: Pod "pod-subpath-test-projected-59wb": Phase="Running", Reason="", readiness=false. Elapsed: 10.038371499s May 20 12:05:20.184: INFO: Pod "pod-subpath-test-projected-59wb": Phase="Running", Reason="", readiness=false. Elapsed: 12.043062824s May 20 12:05:22.188: INFO: Pod "pod-subpath-test-projected-59wb": Phase="Running", Reason="", readiness=false. Elapsed: 14.047241594s May 20 12:05:24.192: INFO: Pod "pod-subpath-test-projected-59wb": Phase="Running", Reason="", readiness=false. Elapsed: 16.051978542s May 20 12:05:26.197: INFO: Pod "pod-subpath-test-projected-59wb": Phase="Running", Reason="", readiness=false. Elapsed: 18.05635113s May 20 12:05:28.201: INFO: Pod "pod-subpath-test-projected-59wb": Phase="Running", Reason="", readiness=false. Elapsed: 20.060651205s May 20 12:05:30.206: INFO: Pod "pod-subpath-test-projected-59wb": Phase="Running", Reason="", readiness=false. Elapsed: 22.065456811s May 20 12:05:32.210: INFO: Pod "pod-subpath-test-projected-59wb": Phase="Running", Reason="", readiness=false. Elapsed: 24.069482026s May 20 12:05:34.215: INFO: Pod "pod-subpath-test-projected-59wb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.07410521s STEP: Saw pod success May 20 12:05:34.215: INFO: Pod "pod-subpath-test-projected-59wb" satisfied condition "success or failure" May 20 12:05:34.218: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-59wb container test-container-subpath-projected-59wb: STEP: delete the pod May 20 12:05:34.282: INFO: Waiting for pod pod-subpath-test-projected-59wb to disappear May 20 12:05:34.294: INFO: Pod pod-subpath-test-projected-59wb no longer exists STEP: Deleting pod pod-subpath-test-projected-59wb May 20 12:05:34.294: INFO: Deleting pod "pod-subpath-test-projected-59wb" in namespace "e2e-tests-subpath-lcp27" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:05:34.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-lcp27" for this suite. 
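Editor's note: the subpath run above polls the pod roughly every two seconds, for up to 5m0s, until it reports a terminal phase ("success or failure"). A minimal stdlib sketch of that wait loop, with a hypothetical getPodPhase helper standing in for the real API lookup:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Canned phase sequence; in the real test this comes from the pod's status.phase.
var phases = []string{"Pending", "Pending", "Running", "Succeeded"}

// getPodPhase is a hypothetical stand-in for querying the API server.
func getPodPhase(i int) string {
	if i < len(phases) {
		return phases[i]
	}
	return phases[len(phases)-1]
}

// waitForSuccessOrFailure mirrors the cadence seen in the log: check every 2s,
// stop on a terminal phase, give up after the timeout.
func waitForSuccessOrFailure(timeout time.Duration) (string, error) {
	start := time.Now()
	for i := 0; ; i++ {
		phase := getPodPhase(i)
		fmt.Printf("Pod phase=%q elapsed=%s\n", phase, time.Since(start))
		if phase == "Succeeded" || phase == "Failed" {
			return phase, nil
		}
		if time.Since(start) > timeout {
			return phase, errors.New("timed out waiting for pod")
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	phase, err := waitForSuccessOrFailure(5 * time.Minute)
	fmt.Println(phase, err)
}
```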
May 20 12:05:40.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:05:40.382: INFO: namespace: e2e-tests-subpath-lcp27, resource: bindings, ignored listing per whitelist May 20 12:05:40.407: INFO: namespace e2e-tests-subpath-lcp27 deletion completed in 6.108202345s • [SLOW TEST:32.444 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:05:40.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-76z7b/configmap-test-394458f3-9a92-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 12:05:40.531: INFO: Waiting up to 5m0s for pod "pod-configmaps-3949c956-9a92-11ea-b520-0242ac110018" in namespace "e2e-tests-configmap-76z7b" to be "success or failure" May 20 12:05:40.536: INFO: Pod "pod-configmaps-3949c956-9a92-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371435ms May 20 12:05:42.540: INFO: Pod "pod-configmaps-3949c956-9a92-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008422375s May 20 12:05:44.544: INFO: Pod "pod-configmaps-3949c956-9a92-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012784758s STEP: Saw pod success May 20 12:05:44.544: INFO: Pod "pod-configmaps-3949c956-9a92-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:05:44.546: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-3949c956-9a92-11ea-b520-0242ac110018 container env-test: STEP: delete the pod May 20 12:05:44.683: INFO: Waiting for pod pod-configmaps-3949c956-9a92-11ea-b520-0242ac110018 to disappear May 20 12:05:44.770: INFO: Pod pod-configmaps-3949c956-9a92-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:05:44.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-76z7b" for this suite. 
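Editor's note: the configmap test above injects ConfigMap keys into the env-test container's environment and then inspects the container's log output. A tiny sketch of what such a container might do; CONFIG_DATA_1 is a hypothetical variable name, since the real key names are not shown in the log:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Hypothetical env var; in the real pod spec a ConfigMap key is mapped
	// to an env var via valueFrom/configMapKeyRef.
	val, ok := os.LookupEnv("CONFIG_DATA_1")
	if !ok {
		fmt.Fprintln(os.Stderr, "env var not set")
		os.Exit(1)
	}
	fmt.Printf("CONFIG_DATA_1=%s\n", val)
}
```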
May 20 12:05:50.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:05:50.861: INFO: namespace: e2e-tests-configmap-76z7b, resource: bindings, ignored listing per whitelist May 20 12:05:50.888: INFO: namespace e2e-tests-configmap-76z7b deletion completed in 6.114386383s • [SLOW TEST:10.480 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:05:50.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:05:50.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-qtpcx" for this suite. 
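Editor's note: the "secure master service" check above concerns the built-in kubernetes Service in the default namespace, which fronts the API server over HTTPS. From inside a pod that endpoint is also advertised through the standard KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables; the sketch below only assembles the URL from them (TLS material and authentication are omitted, so treat it as illustrative, not as what the conformance test does):

```go
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	host := os.Getenv("KUBERNETES_SERVICE_HOST")
	port := os.Getenv("KUBERNETES_SERVICE_PORT")
	if host == "" || port == "" {
		fmt.Fprintln(os.Stderr, "not running inside a cluster")
		os.Exit(1)
	}
	// A real client would also load the service-account token and CA bundle
	// before calling this endpoint on the secure port.
	url := fmt.Sprintf("https://%s/version", net.JoinHostPort(host, port))
	fmt.Println("API server URL:", url)
}
```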
May 20 12:05:57.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:05:57.016: INFO: namespace: e2e-tests-services-qtpcx, resource: bindings, ignored listing per whitelist May 20 12:05:57.924: INFO: namespace e2e-tests-services-qtpcx deletion completed in 6.930326145s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:7.036 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:05:57.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 12:05:58.456: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 May 20 12:05:58.464: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-8pxr8/daemonsets","resourceVersion":"11573865"},"items":null} May 20 12:05:58.466: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-8pxr8/pods","resourceVersion":"11573865"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:05:58.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-8pxr8" for this suite. 
May 20 12:06:04.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:06:04.536: INFO: namespace: e2e-tests-daemonsets-8pxr8, resource: bindings, ignored listing per whitelist May 20 12:06:04.578: INFO: namespace e2e-tests-daemonsets-8pxr8 deletion completed in 6.101918282s S [SKIPPING] [6.653 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 12:05:58.456: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:06:04.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 20 12:06:12.761: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:12.838: INFO: Pod pod-with-poststart-exec-hook still exists May 20 12:06:14.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:14.841: INFO: Pod pod-with-poststart-exec-hook still exists May 20 12:06:16.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:16.843: INFO: Pod pod-with-poststart-exec-hook still exists May 20 12:06:18.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:18.843: INFO: Pod pod-with-poststart-exec-hook still exists May 20 12:06:20.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:20.842: INFO: Pod pod-with-poststart-exec-hook still exists May 20 12:06:22.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:22.842: INFO: Pod pod-with-poststart-exec-hook still exists May 20 12:06:24.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:24.841: INFO: Pod pod-with-poststart-exec-hook still exists May 20 12:06:26.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:26.842: INFO: Pod pod-with-poststart-exec-hook still exists May 20 12:06:28.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:28.842: INFO: Pod pod-with-poststart-exec-hook still exists May 20 12:06:30.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:30.843: INFO: Pod 
pod-with-poststart-exec-hook still exists May 20 12:06:32.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:32.843: INFO: Pod pod-with-poststart-exec-hook still exists May 20 12:06:34.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:34.841: INFO: Pod pod-with-poststart-exec-hook still exists May 20 12:06:36.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:36.843: INFO: Pod pod-with-poststart-exec-hook still exists May 20 12:06:38.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:38.923: INFO: Pod pod-with-poststart-exec-hook still exists May 20 12:06:40.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:40.843: INFO: Pod pod-with-poststart-exec-hook still exists May 20 12:06:42.838: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 20 12:06:42.843: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:06:42.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-swrbk" for this suite. May 20 12:07:04.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:07:04.871: INFO: namespace: e2e-tests-container-lifecycle-hook-swrbk, resource: bindings, ignored listing per whitelist May 20 12:07:04.930: INFO: namespace e2e-tests-container-lifecycle-hook-swrbk deletion completed in 22.084732593s • [SLOW TEST:60.352 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:07:04.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 12:07:05.393: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 20 12:07:05.450: INFO: Pod name sample-pod: Found 0 pods out of 1 May 20 12:07:10.455: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 20 12:07:10.455: INFO: Creating deployment "test-rolling-update-deployment" May 20 12:07:10.460: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from 
the one the adopted replica set "test-rolling-update-controller" has May 20 12:07:10.469: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 20 12:07:12.477: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 20 12:07:12.479: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725573230, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725573230, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725573230, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725573230, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 12:07:14.483: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 20 12:07:14.492: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-sh6k9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sh6k9/deployments/test-rolling-update-deployment,UID:6ee41b25-9a92-11ea-99e8-0242ac110002,ResourceVersion:11574101,Generation:1,CreationTimestamp:2020-05-20 12:07:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-20 12:07:10 +0000 UTC 2020-05-20 12:07:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-20 12:07:13 +0000 UTC 2020-05-20 12:07:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 20 12:07:14.496: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-sh6k9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sh6k9/replicasets/test-rolling-update-deployment-75db98fb4c,UID:6ee6f232-9a92-11ea-99e8-0242ac110002,ResourceVersion:11574092,Generation:1,CreationTimestamp:2020-05-20 12:07:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6ee41b25-9a92-11ea-99e8-0242ac110002 0xc001d68a57 0xc001d68a58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 20 12:07:14.496: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 20 12:07:14.496: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-sh6k9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sh6k9/replicasets/test-rolling-update-controller,UID:6bdfb977-9a92-11ea-99e8-0242ac110002,ResourceVersion:11574100,Generation:2,CreationTimestamp:2020-05-20 12:07:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6ee41b25-9a92-11ea-99e8-0242ac110002 0xc001d6897f 0xc001d68990}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 20 12:07:14.527: INFO: Pod "test-rolling-update-deployment-75db98fb4c-9twcg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-9twcg,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-sh6k9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-sh6k9/pods/test-rolling-update-deployment-75db98fb4c-9twcg,UID:6ee79225-9a92-11ea-99e8-0242ac110002,ResourceVersion:11574091,Generation:0,CreationTimestamp:2020-05-20 12:07:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 6ee6f232-9a92-11ea-99e8-0242ac110002 0xc001750937 0xc001750938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-np79z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-np79z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-np79z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017509b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017509d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:07:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:07:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:07:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:07:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.194,StartTime:2020-05-20 12:07:10 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-20 12:07:13 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://c0c55869713e0462c7bc41a1b5a26fa6ad403b356d22e1ec3ab9ebfa01e13352}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:07:14.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-sh6k9" for this suite. 
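Editor's note: the RollingUpdateDeployment run above dumps the DeploymentStatus twice, first with Replicas:2, UpdatedReplicas:1, UnavailableReplicas:1 while the new ReplicaSet is progressing, then with all replicas updated and available. A small sketch of how that status can be read, using a struct that mirrors only the fields visible in the dump (not the real k8s.io/api type):

```go
package main

import "fmt"

// deploymentStatus mirrors the handful of fields shown in the log dump.
type deploymentStatus struct {
	ObservedGeneration  int64
	Replicas            int32
	UpdatedReplicas     int32
	ReadyReplicas       int32
	AvailableReplicas   int32
	UnavailableReplicas int32
}

// rolloutComplete applies the usual "all replicas updated and available"
// reading of the status for a deployment that wants `desired` replicas.
func rolloutComplete(s deploymentStatus, desired int32) bool {
	return s.UpdatedReplicas == desired &&
		s.Replicas == desired &&
		s.AvailableReplicas == desired &&
		s.UnavailableReplicas == 0
}

func main() {
	inProgress := deploymentStatus{ObservedGeneration: 1, Replicas: 2, UpdatedReplicas: 1, ReadyReplicas: 1, AvailableReplicas: 1, UnavailableReplicas: 1}
	done := deploymentStatus{ObservedGeneration: 1, Replicas: 1, UpdatedReplicas: 1, ReadyReplicas: 1, AvailableReplicas: 1}
	fmt.Println(rolloutComplete(inProgress, 1), rolloutComplete(done, 1)) // false true
}
```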
May 20 12:07:20.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:07:20.628: INFO: namespace: e2e-tests-deployment-sh6k9, resource: bindings, ignored listing per whitelist May 20 12:07:20.633: INFO: namespace e2e-tests-deployment-sh6k9 deletion completed in 6.103481235s • [SLOW TEST:15.702 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:07:20.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 20 12:07:25.298: INFO: Successfully updated pod "annotationupdate7504e214-9a92-11ea-b520-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:07:29.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-fd7lm" for this suite. 
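Editor's note: the downward-API test above patches the pod's annotations and waits for the projected volume file to reflect the change. A minimal sketch of the consuming side, re-reading a file until its contents differ from the last observed value; the path /etc/podinfo/annotations is a hypothetical mount point, not taken from the log:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"time"
)

// waitForChange polls a downward-API style file until its contents change
// from `old`, or the timeout expires.
func waitForChange(path string, old []byte, timeout time.Duration) ([]byte, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		data, err := os.ReadFile(path)
		if err == nil && !bytes.Equal(data, old) {
			return data, nil
		}
		time.Sleep(2 * time.Second)
	}
	return nil, fmt.Errorf("annotations file %s did not change within %s", path, timeout)
}

func main() {
	const path = "/etc/podinfo/annotations" // hypothetical mount point
	old, _ := os.ReadFile(path)
	updated, err := waitForChange(path, old, 2*time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("annotations updated:\n%s\n", updated)
}
```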
May 20 12:07:51.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:07:51.363: INFO: namespace: e2e-tests-downward-api-fd7lm, resource: bindings, ignored listing per whitelist May 20 12:07:51.414: INFO: namespace e2e-tests-downward-api-fd7lm deletion completed in 22.07558361s • [SLOW TEST:30.781 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:07:51.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0520 12:08:01.545862 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 20 12:08:01.545: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:08:01.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-5f66l" for this suite. 
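Editor's note: the garbage-collector test above deletes a replication controller without orphaning and then waits for its pods to disappear; the controller-manager does this by following metadata.ownerReferences. A toy sketch of that dependency rule only (the real collector also handles foreground/background policies, finalizers, and a full dependency graph):

```go
package main

import "fmt"

type object struct {
	Name      string
	UID       string
	OwnerUIDs []string // UIDs from metadata.ownerReferences
}

// collect returns the dependents whose owners are all gone, i.e. the objects
// the garbage collector would delete once the RC has been removed.
func collect(live map[string]bool, dependents []object) []object {
	var doomed []object
	for _, d := range dependents {
		orphaned := len(d.OwnerUIDs) > 0
		for _, uid := range d.OwnerUIDs {
			if live[uid] {
				orphaned = false
				break
			}
		}
		if orphaned {
			doomed = append(doomed, d)
		}
	}
	return doomed
}

func main() {
	live := map[string]bool{} // the RC has been deleted, so no live owners remain
	pods := []object{
		{Name: "rc-pod-1", UID: "p1", OwnerUIDs: []string{"rc-uid"}},
		{Name: "rc-pod-2", UID: "p2", OwnerUIDs: []string{"rc-uid"}},
	}
	for _, p := range collect(live, pods) {
		fmt.Println("would delete", p.Name)
	}
}
```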
May 20 12:08:07.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:08:07.603: INFO: namespace: e2e-tests-gc-5f66l, resource: bindings, ignored listing per whitelist May 20 12:08:07.649: INFO: namespace e2e-tests-gc-5f66l deletion completed in 6.099669872s • [SLOW TEST:16.235 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:08:07.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-9108bbd4-9a92-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 12:08:07.769: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-910c4226-9a92-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-4hdtx" to be "success or failure" May 20 12:08:07.786: INFO: Pod "pod-projected-secrets-910c4226-9a92-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.737771ms May 20 12:08:10.026: INFO: Pod "pod-projected-secrets-910c4226-9a92-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256663464s May 20 12:08:12.030: INFO: Pod "pod-projected-secrets-910c4226-9a92-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.260760935s STEP: Saw pod success May 20 12:08:12.030: INFO: Pod "pod-projected-secrets-910c4226-9a92-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:08:12.033: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-910c4226-9a92-11ea-b520-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 20 12:08:12.164: INFO: Waiting for pod pod-projected-secrets-910c4226-9a92-11ea-b520-0242ac110018 to disappear May 20 12:08:12.191: INFO: Pod pod-projected-secrets-910c4226-9a92-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:08:12.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4hdtx" for this suite. 
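Editor's note: the projected-secret test above mounts the secret through a projected volume and checks the test container's output. A short sketch of the consuming side, reading one key's file from the mount; both the mount path and the key name are hypothetical, since the log only shows the secret's generated name:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Hypothetical projected-volume mount point and key name; the real
	// conformance pod wires these up in its volumes/volumeMounts stanzas.
	mount := "/etc/projected-secret-volume"
	key := "data-1"

	content, err := os.ReadFile(filepath.Join(mount, key))
	if err != nil {
		fmt.Fprintln(os.Stderr, "reading projected secret:", err)
		os.Exit(1)
	}
	fmt.Printf("content of %s/%s: %s\n", mount, key, content)
}
```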
May 20 12:08:18.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:08:18.247: INFO: namespace: e2e-tests-projected-4hdtx, resource: bindings, ignored listing per whitelist May 20 12:08:18.275: INFO: namespace e2e-tests-projected-4hdtx deletion completed in 6.080176607s • [SLOW TEST:10.625 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:08:18.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-9770d241-9a92-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 12:08:18.637: INFO: Waiting up to 5m0s for pod "pod-configmaps-97731302-9a92-11ea-b520-0242ac110018" in namespace "e2e-tests-configmap-qfwtj" to be "success or failure" May 20 12:08:18.640: INFO: Pod "pod-configmaps-97731302-9a92-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.274167ms May 20 12:08:20.673: INFO: Pod "pod-configmaps-97731302-9a92-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036660769s May 20 12:08:23.014: INFO: Pod "pod-configmaps-97731302-9a92-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377451154s May 20 12:08:25.193: INFO: Pod "pod-configmaps-97731302-9a92-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.556795514s May 20 12:08:27.197: INFO: Pod "pod-configmaps-97731302-9a92-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.560249109s STEP: Saw pod success May 20 12:08:27.197: INFO: Pod "pod-configmaps-97731302-9a92-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:08:27.199: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-97731302-9a92-11ea-b520-0242ac110018 container configmap-volume-test: STEP: delete the pod May 20 12:08:27.485: INFO: Waiting for pod pod-configmaps-97731302-9a92-11ea-b520-0242ac110018 to disappear May 20 12:08:27.732: INFO: Pod pod-configmaps-97731302-9a92-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:08:27.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qfwtj" for this suite. 
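Editor's note: the defaultMode variant above is about the permission bits the kubelet sets on the files of the ConfigMap volume. A small sketch of the kind of assertion a consumer could make; both the path and the expected mode (0400 here) are assumptions rather than values taken from the log:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Hypothetical mount path and expected mode; the conformance test sets
	// defaultMode on the volume and then checks the mounted file's bits.
	const path = "/etc/configmap-volume/data-1"
	const want = os.FileMode(0400)

	info, err := os.Stat(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if got := info.Mode().Perm(); got != want {
		fmt.Printf("unexpected mode: got %v, want %v\n", got, want)
		os.Exit(1)
	}
	fmt.Println("file mode is", want)
}
```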
May 20 12:08:34.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:08:34.774: INFO: namespace: e2e-tests-configmap-qfwtj, resource: bindings, ignored listing per whitelist May 20 12:08:34.779: INFO: namespace e2e-tests-configmap-qfwtj deletion completed in 7.043706364s • [SLOW TEST:16.505 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:08:34.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-2cxws I0520 12:08:35.343824 7 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-2cxws, replica count: 1 I0520 12:08:36.394256 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 12:08:37.394432 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 12:08:38.394617 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 12:08:39.394840 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 12:08:40.395102 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 12:08:41.395296 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 12:08:42.395465 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 12:08:42.625: INFO: Created: latency-svc-p48mp May 20 12:08:42.714: INFO: Got endpoints: latency-svc-p48mp [218.329381ms] May 20 12:08:42.873: INFO: Created: latency-svc-76pnt May 20 12:08:42.966: INFO: Got endpoints: latency-svc-76pnt [251.958601ms] May 20 12:08:42.976: INFO: Created: latency-svc-rhm6j May 20 12:08:43.047: INFO: Got endpoints: latency-svc-rhm6j [333.707324ms] May 20 12:08:43.152: INFO: Created: latency-svc-vd8cq May 20 12:08:43.159: INFO: Got endpoints: latency-svc-vd8cq [445.349262ms] May 20 12:08:43.193: INFO: Created: latency-svc-876zk May 20 12:08:43.243: INFO: Got endpoints: latency-svc-876zk [529.256582ms] May 
20 12:08:43.392: INFO: Created: latency-svc-gcfsf May 20 12:08:43.442: INFO: Got endpoints: latency-svc-gcfsf [728.100712ms] May 20 12:08:43.712: INFO: Created: latency-svc-kt2q2 May 20 12:08:43.919: INFO: Got endpoints: latency-svc-kt2q2 [1.205319535s] May 20 12:08:44.133: INFO: Created: latency-svc-7cp97 May 20 12:08:44.361: INFO: Got endpoints: latency-svc-7cp97 [1.647539893s] May 20 12:08:44.375: INFO: Created: latency-svc-r6f6c May 20 12:08:44.410: INFO: Got endpoints: latency-svc-r6f6c [1.696299738s] May 20 12:08:44.556: INFO: Created: latency-svc-8j5rx May 20 12:08:44.566: INFO: Got endpoints: latency-svc-8j5rx [1.852021401s] May 20 12:08:44.708: INFO: Created: latency-svc-d7px5 May 20 12:08:44.722: INFO: Got endpoints: latency-svc-d7px5 [2.008282994s] May 20 12:08:44.749: INFO: Created: latency-svc-jgmrx May 20 12:08:44.760: INFO: Got endpoints: latency-svc-jgmrx [2.045967551s] May 20 12:08:44.882: INFO: Created: latency-svc-rp9lv May 20 12:08:44.886: INFO: Got endpoints: latency-svc-rp9lv [2.171599827s] May 20 12:08:44.949: INFO: Created: latency-svc-qmk2j May 20 12:08:44.975: INFO: Got endpoints: latency-svc-qmk2j [2.260756463s] May 20 12:08:45.044: INFO: Created: latency-svc-2qkft May 20 12:08:45.085: INFO: Got endpoints: latency-svc-2qkft [2.370958904s] May 20 12:08:45.618: INFO: Created: latency-svc-gkvk7 May 20 12:08:45.648: INFO: Got endpoints: latency-svc-gkvk7 [2.934359942s] May 20 12:08:46.260: INFO: Created: latency-svc-6s77l May 20 12:08:46.264: INFO: Got endpoints: latency-svc-6s77l [3.298343519s] May 20 12:08:46.823: INFO: Created: latency-svc-wp7q5 May 20 12:08:46.838: INFO: Got endpoints: latency-svc-wp7q5 [3.790863674s] May 20 12:08:47.481: INFO: Created: latency-svc-bcbp8 May 20 12:08:47.506: INFO: Got endpoints: latency-svc-bcbp8 [4.346871636s] May 20 12:08:47.506: INFO: Created: latency-svc-nwlm5 May 20 12:08:47.518: INFO: Got endpoints: latency-svc-nwlm5 [4.274856768s] May 20 12:08:47.550: INFO: Created: latency-svc-pxdzd May 20 12:08:47.560: INFO: Got endpoints: latency-svc-pxdzd [4.118131504s] May 20 12:08:47.578: INFO: Created: latency-svc-mxwhp May 20 12:08:47.624: INFO: Got endpoints: latency-svc-mxwhp [3.705248424s] May 20 12:08:47.631: INFO: Created: latency-svc-m7bq5 May 20 12:08:47.651: INFO: Got endpoints: latency-svc-m7bq5 [3.289562252s] May 20 12:08:47.674: INFO: Created: latency-svc-7qxwf May 20 12:08:47.687: INFO: Got endpoints: latency-svc-7qxwf [3.276884467s] May 20 12:08:47.706: INFO: Created: latency-svc-7q5fc May 20 12:08:47.718: INFO: Got endpoints: latency-svc-7q5fc [3.152401933s] May 20 12:08:48.309: INFO: Created: latency-svc-vtwkw May 20 12:08:48.323: INFO: Got endpoints: latency-svc-vtwkw [3.600909197s] May 20 12:08:48.915: INFO: Created: latency-svc-vwgnh May 20 12:08:48.929: INFO: Got endpoints: latency-svc-vwgnh [4.169120608s] May 20 12:08:49.453: INFO: Created: latency-svc-ztj92 May 20 12:08:49.462: INFO: Got endpoints: latency-svc-ztj92 [4.576718189s] May 20 12:08:49.478: INFO: Created: latency-svc-z6j5k May 20 12:08:49.522: INFO: Got endpoints: latency-svc-z6j5k [4.547428517s] May 20 12:08:49.539: INFO: Created: latency-svc-c5t6r May 20 12:08:49.565: INFO: Got endpoints: latency-svc-c5t6r [4.480043816s] May 20 12:08:50.147: INFO: Created: latency-svc-g7clc May 20 12:08:50.155: INFO: Got endpoints: latency-svc-g7clc [4.506235762s] May 20 12:08:50.707: INFO: Created: latency-svc-f24tt May 20 12:08:50.720: INFO: Got endpoints: latency-svc-f24tt [4.456009765s] May 20 12:08:51.320: INFO: Created: latency-svc-8njkj May 20 12:08:51.322: 
INFO: Got endpoints: latency-svc-8njkj [4.484043351s] May 20 12:08:51.913: INFO: Created: latency-svc-gtcfh May 20 12:08:51.959: INFO: Got endpoints: latency-svc-gtcfh [4.453370877s] May 20 12:08:51.972: INFO: Created: latency-svc-r4bvr May 20 12:08:51.986: INFO: Got endpoints: latency-svc-r4bvr [4.467578418s] May 20 12:08:52.015: INFO: Created: latency-svc-77sfj May 20 12:08:52.027: INFO: Got endpoints: latency-svc-77sfj [4.466694916s] May 20 12:08:52.050: INFO: Created: latency-svc-4khqr May 20 12:08:52.127: INFO: Got endpoints: latency-svc-4khqr [4.50272499s] May 20 12:08:52.148: INFO: Created: latency-svc-9kfft May 20 12:08:52.166: INFO: Got endpoints: latency-svc-9kfft [4.514706328s] May 20 12:08:52.278: INFO: Created: latency-svc-8pndz May 20 12:08:52.292: INFO: Got endpoints: latency-svc-8pndz [4.604983789s] May 20 12:08:52.315: INFO: Created: latency-svc-6q9np May 20 12:08:52.328: INFO: Got endpoints: latency-svc-6q9np [4.60974338s] May 20 12:08:52.345: INFO: Created: latency-svc-b4vlt May 20 12:08:52.358: INFO: Got endpoints: latency-svc-b4vlt [4.035202097s] May 20 12:08:52.374: INFO: Created: latency-svc-6s9l2 May 20 12:08:52.421: INFO: Got endpoints: latency-svc-6s9l2 [3.492192632s] May 20 12:08:52.927: INFO: Created: latency-svc-ht9tv May 20 12:08:52.944: INFO: Got endpoints: latency-svc-ht9tv [3.481723744s] May 20 12:08:53.565: INFO: Created: latency-svc-6w68h May 20 12:08:53.567: INFO: Got endpoints: latency-svc-6w68h [4.045085517s] May 20 12:08:53.597: INFO: Created: latency-svc-c48kh May 20 12:08:53.611: INFO: Got endpoints: latency-svc-c48kh [4.046401435s] May 20 12:08:53.657: INFO: Created: latency-svc-zv7w5 May 20 12:08:53.708: INFO: Got endpoints: latency-svc-zv7w5 [3.55340862s] May 20 12:08:53.724: INFO: Created: latency-svc-k59cf May 20 12:08:53.738: INFO: Got endpoints: latency-svc-k59cf [3.017761395s] May 20 12:08:54.317: INFO: Created: latency-svc-c4r4s May 20 12:08:54.325: INFO: Got endpoints: latency-svc-c4r4s [3.002537853s] May 20 12:08:54.346: INFO: Created: latency-svc-4m74q May 20 12:08:54.397: INFO: Got endpoints: latency-svc-4m74q [2.43744706s] May 20 12:08:54.922: INFO: Created: latency-svc-7pgtg May 20 12:08:54.937: INFO: Got endpoints: latency-svc-7pgtg [2.950863967s] May 20 12:08:54.959: INFO: Created: latency-svc-k4l9r May 20 12:08:54.973: INFO: Got endpoints: latency-svc-k4l9r [2.946351823s] May 20 12:08:55.002: INFO: Created: latency-svc-qppn2 May 20 12:08:55.067: INFO: Got endpoints: latency-svc-qppn2 [2.940217055s] May 20 12:08:55.075: INFO: Created: latency-svc-48bqd May 20 12:08:55.082: INFO: Got endpoints: latency-svc-48bqd [2.915877768s] May 20 12:08:55.126: INFO: Created: latency-svc-4tf64 May 20 12:08:55.148: INFO: Got endpoints: latency-svc-4tf64 [2.855648972s] May 20 12:08:55.221: INFO: Created: latency-svc-qm8hj May 20 12:08:55.230: INFO: Got endpoints: latency-svc-qm8hj [2.901732211s] May 20 12:08:55.258: INFO: Created: latency-svc-k5jjf May 20 12:08:55.285: INFO: Got endpoints: latency-svc-k5jjf [2.926645489s] May 20 12:08:55.361: INFO: Created: latency-svc-zwxzg May 20 12:08:55.364: INFO: Got endpoints: latency-svc-zwxzg [2.942786711s] May 20 12:08:55.396: INFO: Created: latency-svc-rr7l8 May 20 12:08:55.418: INFO: Got endpoints: latency-svc-rr7l8 [2.473635492s] May 20 12:08:55.450: INFO: Created: latency-svc-7xmr8 May 20 12:08:55.460: INFO: Got endpoints: latency-svc-7xmr8 [1.892230937s] May 20 12:08:55.523: INFO: Created: latency-svc-6zxs9 May 20 12:08:55.526: INFO: Got endpoints: latency-svc-6zxs9 [1.914479204s] May 20 12:08:55.553: 
INFO: Created: latency-svc-4blgk May 20 12:08:55.563: INFO: Got endpoints: latency-svc-4blgk [1.855276761s] May 20 12:08:55.593: INFO: Created: latency-svc-2m4jd May 20 12:08:55.605: INFO: Got endpoints: latency-svc-2m4jd [1.866904132s] May 20 12:08:55.703: INFO: Created: latency-svc-mnpds May 20 12:08:55.737: INFO: Got endpoints: latency-svc-mnpds [1.411955247s] May 20 12:08:55.738: INFO: Created: latency-svc-thlhq May 20 12:08:55.762: INFO: Got endpoints: latency-svc-thlhq [1.365115474s] May 20 12:08:55.780: INFO: Created: latency-svc-2dxbz May 20 12:08:55.792: INFO: Got endpoints: latency-svc-2dxbz [855.020207ms] May 20 12:08:55.842: INFO: Created: latency-svc-txbwk May 20 12:08:55.871: INFO: Got endpoints: latency-svc-txbwk [897.18889ms] May 20 12:08:56.002: INFO: Created: latency-svc-q9lxs May 20 12:08:56.006: INFO: Got endpoints: latency-svc-q9lxs [938.613483ms] May 20 12:08:56.039: INFO: Created: latency-svc-kbkpc May 20 12:08:56.086: INFO: Got endpoints: latency-svc-kbkpc [1.004628088s] May 20 12:08:56.164: INFO: Created: latency-svc-hc755 May 20 12:08:56.167: INFO: Got endpoints: latency-svc-hc755 [1.019089758s] May 20 12:08:56.214: INFO: Created: latency-svc-lj9qf May 20 12:08:56.338: INFO: Got endpoints: latency-svc-lj9qf [1.107882262s] May 20 12:08:56.364: INFO: Created: latency-svc-h5pzd May 20 12:08:56.369: INFO: Got endpoints: latency-svc-h5pzd [1.083922187s] May 20 12:08:56.411: INFO: Created: latency-svc-p9blh May 20 12:08:56.435: INFO: Got endpoints: latency-svc-p9blh [1.070350355s] May 20 12:08:56.519: INFO: Created: latency-svc-xg9l9 May 20 12:08:56.532: INFO: Got endpoints: latency-svc-xg9l9 [1.113793685s] May 20 12:08:56.548: INFO: Created: latency-svc-76tmp May 20 12:08:56.574: INFO: Got endpoints: latency-svc-76tmp [1.113854204s] May 20 12:08:56.598: INFO: Created: latency-svc-cpfp5 May 20 12:08:56.703: INFO: Got endpoints: latency-svc-cpfp5 [1.176585254s] May 20 12:08:56.729: INFO: Created: latency-svc-5wz8z May 20 12:08:56.894: INFO: Got endpoints: latency-svc-5wz8z [1.330723436s] May 20 12:08:56.956: INFO: Created: latency-svc-xrfv5 May 20 12:08:56.970: INFO: Got endpoints: latency-svc-xrfv5 [1.365456827s] May 20 12:08:57.068: INFO: Created: latency-svc-zk8nm May 20 12:08:57.071: INFO: Got endpoints: latency-svc-zk8nm [1.334035508s] May 20 12:08:57.100: INFO: Created: latency-svc-h6p6t May 20 12:08:57.103: INFO: Got endpoints: latency-svc-h6p6t [1.340675272s] May 20 12:08:57.144: INFO: Created: latency-svc-dqqgn May 20 12:08:57.158: INFO: Got endpoints: latency-svc-dqqgn [1.366044815s] May 20 12:08:57.242: INFO: Created: latency-svc-55ljs May 20 12:08:57.244: INFO: Got endpoints: latency-svc-55ljs [1.373687837s] May 20 12:08:57.276: INFO: Created: latency-svc-j7n6n May 20 12:08:57.290: INFO: Got endpoints: latency-svc-j7n6n [1.283800362s] May 20 12:08:57.312: INFO: Created: latency-svc-h9gvf May 20 12:08:57.427: INFO: Got endpoints: latency-svc-h9gvf [1.340518802s] May 20 12:08:57.448: INFO: Created: latency-svc-5phgz May 20 12:08:57.476: INFO: Got endpoints: latency-svc-5phgz [1.309050476s] May 20 12:08:57.503: INFO: Created: latency-svc-mt5hh May 20 12:08:57.571: INFO: Got endpoints: latency-svc-mt5hh [1.232633282s] May 20 12:08:57.594: INFO: Created: latency-svc-4qmhz May 20 12:08:57.628: INFO: Got endpoints: latency-svc-4qmhz [1.259216255s] May 20 12:08:57.714: INFO: Created: latency-svc-b4h2n May 20 12:08:57.717: INFO: Got endpoints: latency-svc-b4h2n [1.281935516s] May 20 12:08:57.744: INFO: Created: latency-svc-tnnlv May 20 12:08:57.759: INFO: Got 
endpoints: latency-svc-tnnlv [1.227501471s] May 20 12:08:57.780: INFO: Created: latency-svc-xsw48 May 20 12:08:57.789: INFO: Got endpoints: latency-svc-xsw48 [1.215848703s] May 20 12:08:57.808: INFO: Created: latency-svc-lnrsh May 20 12:08:57.870: INFO: Got endpoints: latency-svc-lnrsh [1.167311445s] May 20 12:08:57.872: INFO: Created: latency-svc-dg7mn May 20 12:08:57.899: INFO: Got endpoints: latency-svc-dg7mn [1.004955206s] May 20 12:08:57.900: INFO: Created: latency-svc-4dsn6 May 20 12:08:57.917: INFO: Got endpoints: latency-svc-4dsn6 [947.044192ms] May 20 12:08:57.942: INFO: Created: latency-svc-hbjj2 May 20 12:08:57.953: INFO: Got endpoints: latency-svc-hbjj2 [881.783051ms] May 20 12:08:58.003: INFO: Created: latency-svc-l7t5w May 20 12:08:58.006: INFO: Got endpoints: latency-svc-l7t5w [902.747865ms] May 20 12:08:58.037: INFO: Created: latency-svc-gw8p6 May 20 12:08:58.158: INFO: Got endpoints: latency-svc-gw8p6 [999.745334ms] May 20 12:08:58.162: INFO: Created: latency-svc-bwwdm May 20 12:08:58.170: INFO: Got endpoints: latency-svc-bwwdm [925.24085ms] May 20 12:08:58.192: INFO: Created: latency-svc-h5926 May 20 12:08:58.200: INFO: Got endpoints: latency-svc-h5926 [909.725232ms] May 20 12:08:58.243: INFO: Created: latency-svc-lpzfq May 20 12:08:58.319: INFO: Got endpoints: latency-svc-lpzfq [892.375501ms] May 20 12:08:58.321: INFO: Created: latency-svc-m7ljz May 20 12:08:58.333: INFO: Got endpoints: latency-svc-m7ljz [856.770544ms] May 20 12:08:58.360: INFO: Created: latency-svc-xccv2 May 20 12:08:58.375: INFO: Got endpoints: latency-svc-xccv2 [804.101798ms] May 20 12:08:58.410: INFO: Created: latency-svc-xrzm7 May 20 12:08:58.457: INFO: Got endpoints: latency-svc-xrzm7 [828.493949ms] May 20 12:08:58.470: INFO: Created: latency-svc-lfdjz May 20 12:08:58.483: INFO: Got endpoints: latency-svc-lfdjz [766.707835ms] May 20 12:08:58.511: INFO: Created: latency-svc-vrq4w May 20 12:08:58.526: INFO: Got endpoints: latency-svc-vrq4w [766.862056ms] May 20 12:08:58.595: INFO: Created: latency-svc-7rj7h May 20 12:08:58.598: INFO: Got endpoints: latency-svc-7rj7h [808.555349ms] May 20 12:08:59.218: INFO: Created: latency-svc-8kcpx May 20 12:08:59.222: INFO: Got endpoints: latency-svc-8kcpx [1.351893828s] May 20 12:08:59.249: INFO: Created: latency-svc-l2pvr May 20 12:08:59.264: INFO: Got endpoints: latency-svc-l2pvr [1.364333689s] May 20 12:08:59.291: INFO: Created: latency-svc-8lt62 May 20 12:08:59.306: INFO: Got endpoints: latency-svc-8lt62 [1.388818358s] May 20 12:08:59.404: INFO: Created: latency-svc-s4gjs May 20 12:08:59.470: INFO: Got endpoints: latency-svc-s4gjs [1.517128125s] May 20 12:09:00.154: INFO: Created: latency-svc-tsvn2 May 20 12:09:00.169: INFO: Got endpoints: latency-svc-tsvn2 [2.163289514s] May 20 12:09:00.272: INFO: Created: latency-svc-jqhrs May 20 12:09:00.300: INFO: Got endpoints: latency-svc-jqhrs [2.142181428s] May 20 12:09:00.340: INFO: Created: latency-svc-6gtkr May 20 12:09:00.351: INFO: Got endpoints: latency-svc-6gtkr [2.180743751s] May 20 12:09:00.870: INFO: Created: latency-svc-g9g8x May 20 12:09:00.889: INFO: Got endpoints: latency-svc-g9g8x [2.689650758s] May 20 12:09:01.491: INFO: Created: latency-svc-glctc May 20 12:09:01.506: INFO: Got endpoints: latency-svc-glctc [3.186896125s] May 20 12:09:02.004: INFO: Created: latency-svc-56hzf May 20 12:09:02.015: INFO: Got endpoints: latency-svc-56hzf [3.682331656s] May 20 12:09:02.563: INFO: Created: latency-svc-rbgmq May 20 12:09:02.567: INFO: Got endpoints: latency-svc-rbgmq [4.192421704s] May 20 12:09:02.601: INFO: 
Created: latency-svc-cpvgk May 20 12:09:02.609: INFO: Got endpoints: latency-svc-cpvgk [4.152383946s] May 20 12:09:03.169: INFO: Created: latency-svc-q9pth May 20 12:09:03.182: INFO: Got endpoints: latency-svc-q9pth [4.69878646s] May 20 12:09:03.775: INFO: Created: latency-svc-q5qdm May 20 12:09:03.784: INFO: Got endpoints: latency-svc-q5qdm [5.257676603s] May 20 12:09:04.362: INFO: Created: latency-svc-gln2q May 20 12:09:04.366: INFO: Got endpoints: latency-svc-gln2q [5.768391437s] May 20 12:09:04.991: INFO: Created: latency-svc-rwqhm May 20 12:09:05.021: INFO: Got endpoints: latency-svc-rwqhm [5.798800018s] May 20 12:09:05.530: INFO: Created: latency-svc-cq8gq May 20 12:09:05.546: INFO: Got endpoints: latency-svc-cq8gq [6.282404122s] May 20 12:09:06.008: INFO: Created: latency-svc-c78xx May 20 12:09:06.034: INFO: Got endpoints: latency-svc-c78xx [6.727226118s] May 20 12:09:06.493: INFO: Created: latency-svc-2b8gf May 20 12:09:06.498: INFO: Got endpoints: latency-svc-2b8gf [7.028166744s] May 20 12:09:06.518: INFO: Created: latency-svc-6wd4n May 20 12:09:06.530: INFO: Got endpoints: latency-svc-6wd4n [6.360905384s] May 20 12:09:07.003: INFO: Created: latency-svc-llm6d May 20 12:09:07.014: INFO: Got endpoints: latency-svc-llm6d [6.71390281s] May 20 12:09:07.572: INFO: Created: latency-svc-kdcdn May 20 12:09:07.609: INFO: Got endpoints: latency-svc-kdcdn [7.25871132s] May 20 12:09:08.071: INFO: Created: latency-svc-5xmw9 May 20 12:09:08.145: INFO: Got endpoints: latency-svc-5xmw9 [7.25590001s] May 20 12:09:08.599: INFO: Created: latency-svc-6hwdk May 20 12:09:08.615: INFO: Got endpoints: latency-svc-6hwdk [7.10836537s] May 20 12:09:09.162: INFO: Created: latency-svc-c4xwv May 20 12:09:09.178: INFO: Got endpoints: latency-svc-c4xwv [7.162167865s] May 20 12:09:09.731: INFO: Created: latency-svc-llh48 May 20 12:09:09.747: INFO: Got endpoints: latency-svc-llh48 [7.179683695s] May 20 12:09:09.772: INFO: Created: latency-svc-mgfg7 May 20 12:09:09.783: INFO: Got endpoints: latency-svc-mgfg7 [7.173984408s] May 20 12:09:09.834: INFO: Created: latency-svc-g429m May 20 12:09:09.846: INFO: Got endpoints: latency-svc-g429m [6.663480057s] May 20 12:09:10.378: INFO: Created: latency-svc-q8pbt May 20 12:09:10.402: INFO: Got endpoints: latency-svc-q8pbt [6.618145738s] May 20 12:09:10.469: INFO: Created: latency-svc-rzzrc May 20 12:09:10.491: INFO: Got endpoints: latency-svc-rzzrc [6.124862949s] May 20 12:09:10.527: INFO: Created: latency-svc-w9frt May 20 12:09:10.539: INFO: Got endpoints: latency-svc-w9frt [5.518441788s] May 20 12:09:10.589: INFO: Created: latency-svc-j7l6d May 20 12:09:10.592: INFO: Got endpoints: latency-svc-j7l6d [5.045568234s] May 20 12:09:10.619: INFO: Created: latency-svc-sh4xt May 20 12:09:10.630: INFO: Got endpoints: latency-svc-sh4xt [4.596303767s] May 20 12:09:10.650: INFO: Created: latency-svc-vfj6v May 20 12:09:10.660: INFO: Got endpoints: latency-svc-vfj6v [4.161717972s] May 20 12:09:11.212: INFO: Created: latency-svc-sd24n May 20 12:09:11.265: INFO: Got endpoints: latency-svc-sd24n [4.735259975s] May 20 12:09:11.747: INFO: Created: latency-svc-dzf48 May 20 12:09:11.757: INFO: Got endpoints: latency-svc-dzf48 [4.742933239s] May 20 12:09:11.780: INFO: Created: latency-svc-scm79 May 20 12:09:11.793: INFO: Got endpoints: latency-svc-scm79 [4.18399321s] May 20 12:09:12.337: INFO: Created: latency-svc-bnkk8 May 20 12:09:12.356: INFO: Got endpoints: latency-svc-bnkk8 [4.210864517s] May 20 12:09:12.883: INFO: Created: latency-svc-lxfx5 May 20 12:09:12.908: INFO: Got endpoints: 
latency-svc-lxfx5 [4.292977248s] May 20 12:09:13.399: INFO: Created: latency-svc-bdjbr May 20 12:09:13.427: INFO: Got endpoints: latency-svc-bdjbr [4.24972716s] May 20 12:09:13.945: INFO: Created: latency-svc-7sxmw May 20 12:09:14.002: INFO: Got endpoints: latency-svc-7sxmw [4.254904233s] May 20 12:09:14.512: INFO: Created: latency-svc-pvvm2 May 20 12:09:14.625: INFO: Got endpoints: latency-svc-pvvm2 [4.841499377s] May 20 12:09:15.050: INFO: Created: latency-svc-h7kz4 May 20 12:09:15.064: INFO: Got endpoints: latency-svc-h7kz4 [5.218317296s] May 20 12:09:15.626: INFO: Created: latency-svc-965wb May 20 12:09:15.629: INFO: Got endpoints: latency-svc-965wb [5.227041812s] May 20 12:09:16.187: INFO: Created: latency-svc-xlxth May 20 12:09:16.247: INFO: Got endpoints: latency-svc-xlxth [5.755858421s] May 20 12:09:16.737: INFO: Created: latency-svc-h68kk May 20 12:09:16.749: INFO: Got endpoints: latency-svc-h68kk [6.209979263s] May 20 12:09:17.488: INFO: Created: latency-svc-p9bfn May 20 12:09:17.933: INFO: Got endpoints: latency-svc-p9bfn [7.341216003s] May 20 12:09:18.196: INFO: Created: latency-svc-5ztqf May 20 12:09:18.260: INFO: Got endpoints: latency-svc-5ztqf [7.629956712s] May 20 12:09:18.865: INFO: Created: latency-svc-2g7z7 May 20 12:09:18.869: INFO: Got endpoints: latency-svc-2g7z7 [8.208555494s] May 20 12:09:19.423: INFO: Created: latency-svc-4c2xw May 20 12:09:19.751: INFO: Got endpoints: latency-svc-4c2xw [8.485522099s] May 20 12:09:19.756: INFO: Created: latency-svc-jm7h6 May 20 12:09:19.810: INFO: Got endpoints: latency-svc-jm7h6 [8.05270868s] May 20 12:09:20.458: INFO: Created: latency-svc-t2wrj May 20 12:09:20.487: INFO: Got endpoints: latency-svc-t2wrj [8.693751448s] May 20 12:09:21.003: INFO: Created: latency-svc-dgzrq May 20 12:09:21.016: INFO: Got endpoints: latency-svc-dgzrq [8.659510404s] May 20 12:09:21.034: INFO: Created: latency-svc-t8gpt May 20 12:09:21.048: INFO: Got endpoints: latency-svc-t8gpt [8.140333994s] May 20 12:09:21.071: INFO: Created: latency-svc-ksr2k May 20 12:09:21.434: INFO: Got endpoints: latency-svc-ksr2k [8.006519519s] May 20 12:09:21.440: INFO: Created: latency-svc-97xxh May 20 12:09:21.465: INFO: Got endpoints: latency-svc-97xxh [7.462656499s] May 20 12:09:21.495: INFO: Created: latency-svc-rzsll May 20 12:09:21.522: INFO: Got endpoints: latency-svc-rzsll [6.896753325s] May 20 12:09:21.788: INFO: Created: latency-svc-s4k46 May 20 12:09:21.818: INFO: Got endpoints: latency-svc-s4k46 [6.753841544s] May 20 12:09:22.057: INFO: Created: latency-svc-hzvz6 May 20 12:09:22.109: INFO: Got endpoints: latency-svc-hzvz6 [6.479424493s] May 20 12:09:22.247: INFO: Created: latency-svc-rx2tr May 20 12:09:22.277: INFO: Got endpoints: latency-svc-rx2tr [6.029467291s] May 20 12:09:22.338: INFO: Created: latency-svc-85nrb May 20 12:09:22.379: INFO: Got endpoints: latency-svc-85nrb [5.629995732s] May 20 12:09:22.419: INFO: Created: latency-svc-fwrjq May 20 12:09:22.446: INFO: Got endpoints: latency-svc-fwrjq [4.512667019s] May 20 12:09:22.626: INFO: Created: latency-svc-d6smg May 20 12:09:22.685: INFO: Got endpoints: latency-svc-d6smg [4.42500799s] May 20 12:09:22.823: INFO: Created: latency-svc-vqg2p May 20 12:09:22.825: INFO: Got endpoints: latency-svc-vqg2p [3.956620682s] May 20 12:09:22.888: INFO: Created: latency-svc-bg5cc May 20 12:09:22.918: INFO: Got endpoints: latency-svc-bg5cc [3.166838997s] May 20 12:09:23.002: INFO: Created: latency-svc-htgx9 May 20 12:09:23.005: INFO: Got endpoints: latency-svc-htgx9 [3.195489395s] May 20 12:09:23.034: INFO: Created: 
latency-svc-lrtd5 May 20 12:09:23.046: INFO: Got endpoints: latency-svc-lrtd5 [2.558730048s] May 20 12:09:23.081: INFO: Created: latency-svc-s255m May 20 12:09:23.170: INFO: Got endpoints: latency-svc-s255m [2.154250985s] May 20 12:09:23.256: INFO: Created: latency-svc-bmjl2 May 20 12:09:23.355: INFO: Got endpoints: latency-svc-bmjl2 [2.307167105s] May 20 12:09:23.417: INFO: Created: latency-svc-fnghc May 20 12:09:23.535: INFO: Got endpoints: latency-svc-fnghc [2.101006102s] May 20 12:09:23.538: INFO: Created: latency-svc-rx6cd May 20 12:09:23.579: INFO: Created: latency-svc-lb6mb May 20 12:09:23.668: INFO: Got endpoints: latency-svc-rx6cd [2.203698047s] May 20 12:09:23.669: INFO: Created: latency-svc-946m5 May 20 12:09:23.702: INFO: Got endpoints: latency-svc-946m5 [1.883794924s] May 20 12:09:23.709: INFO: Got endpoints: latency-svc-lb6mb [2.187225914s] May 20 12:09:23.730: INFO: Created: latency-svc-9rjgw May 20 12:09:23.752: INFO: Got endpoints: latency-svc-9rjgw [1.643567032s] May 20 12:09:23.795: INFO: Created: latency-svc-xp69m May 20 12:09:23.810: INFO: Got endpoints: latency-svc-xp69m [1.532896342s] May 20 12:09:23.833: INFO: Created: latency-svc-9742t May 20 12:09:23.848: INFO: Got endpoints: latency-svc-9742t [1.468175821s] May 20 12:09:23.870: INFO: Created: latency-svc-d86gr May 20 12:09:23.936: INFO: Got endpoints: latency-svc-d86gr [1.490537074s] May 20 12:09:23.975: INFO: Created: latency-svc-j8nrc May 20 12:09:23.979: INFO: Got endpoints: latency-svc-j8nrc [1.293627999s] May 20 12:09:24.007: INFO: Created: latency-svc-n26rk May 20 12:09:24.009: INFO: Got endpoints: latency-svc-n26rk [1.183978863s] May 20 12:09:24.308: INFO: Created: latency-svc-p2rkm May 20 12:09:24.311: INFO: Got endpoints: latency-svc-p2rkm [1.393219857s] May 20 12:09:24.367: INFO: Created: latency-svc-2fppv May 20 12:09:24.369: INFO: Got endpoints: latency-svc-2fppv [1.36428743s] May 20 12:09:24.395: INFO: Created: latency-svc-5nws9 May 20 12:09:24.400: INFO: Got endpoints: latency-svc-5nws9 [1.35369075s] May 20 12:09:24.461: INFO: Created: latency-svc-rwqcn May 20 12:09:24.465: INFO: Got endpoints: latency-svc-rwqcn [1.294512536s] May 20 12:09:24.505: INFO: Created: latency-svc-jbsh5 May 20 12:09:24.519: INFO: Got endpoints: latency-svc-jbsh5 [1.163388597s] May 20 12:09:24.602: INFO: Created: latency-svc-vz2zx May 20 12:09:24.605: INFO: Got endpoints: latency-svc-vz2zx [1.069409895s] May 20 12:09:24.643: INFO: Created: latency-svc-9mf6z May 20 12:09:24.645: INFO: Got endpoints: latency-svc-9mf6z [976.508708ms] May 20 12:09:24.673: INFO: Created: latency-svc-tv8kv May 20 12:09:24.675: INFO: Got endpoints: latency-svc-tv8kv [973.469244ms] May 20 12:09:24.739: INFO: Created: latency-svc-kfrx5 May 20 12:09:24.742: INFO: Got endpoints: latency-svc-kfrx5 [1.032645547s] May 20 12:09:24.768: INFO: Created: latency-svc-l6s54 May 20 12:09:24.785: INFO: Got endpoints: latency-svc-l6s54 [1.032750649s] May 20 12:09:24.811: INFO: Created: latency-svc-g6sk9 May 20 12:09:24.827: INFO: Got endpoints: latency-svc-g6sk9 [1.016715462s] May 20 12:09:24.878: INFO: Created: latency-svc-7dnxr May 20 12:09:24.887: INFO: Got endpoints: latency-svc-7dnxr [1.039065217s] May 20 12:09:25.014: INFO: Created: latency-svc-7n68n May 20 12:09:25.019: INFO: Got endpoints: latency-svc-7n68n [1.082257972s] May 20 12:09:25.046: INFO: Created: latency-svc-cq9sk May 20 12:09:25.074: INFO: Got endpoints: latency-svc-cq9sk [1.094704266s] May 20 12:09:25.105: INFO: Created: latency-svc-smpq2 May 20 12:09:25.158: INFO: Got endpoints: 
latency-svc-smpq2 [1.148421603s] May 20 12:09:25.178: INFO: Created: latency-svc-ptgxb May 20 12:09:25.189: INFO: Got endpoints: latency-svc-ptgxb [878.046262ms] May 20 12:09:25.207: INFO: Created: latency-svc-bwbnn May 20 12:09:25.218: INFO: Got endpoints: latency-svc-bwbnn [848.580329ms] May 20 12:09:25.237: INFO: Created: latency-svc-gchc6 May 20 12:09:25.248: INFO: Got endpoints: latency-svc-gchc6 [848.176331ms] May 20 12:09:25.248: INFO: Latencies: [251.958601ms 333.707324ms 445.349262ms 529.256582ms 728.100712ms 766.707835ms 766.862056ms 804.101798ms 808.555349ms 828.493949ms 848.176331ms 848.580329ms 855.020207ms 856.770544ms 878.046262ms 881.783051ms 892.375501ms 897.18889ms 902.747865ms 909.725232ms 925.24085ms 938.613483ms 947.044192ms 973.469244ms 976.508708ms 999.745334ms 1.004628088s 1.004955206s 1.016715462s 1.019089758s 1.032645547s 1.032750649s 1.039065217s 1.069409895s 1.070350355s 1.082257972s 1.083922187s 1.094704266s 1.107882262s 1.113793685s 1.113854204s 1.148421603s 1.163388597s 1.167311445s 1.176585254s 1.183978863s 1.205319535s 1.215848703s 1.227501471s 1.232633282s 1.259216255s 1.281935516s 1.283800362s 1.293627999s 1.294512536s 1.309050476s 1.330723436s 1.334035508s 1.340518802s 1.340675272s 1.351893828s 1.35369075s 1.36428743s 1.364333689s 1.365115474s 1.365456827s 1.366044815s 1.373687837s 1.388818358s 1.393219857s 1.411955247s 1.468175821s 1.490537074s 1.517128125s 1.532896342s 1.643567032s 1.647539893s 1.696299738s 1.852021401s 1.855276761s 1.866904132s 1.883794924s 1.892230937s 1.914479204s 2.008282994s 2.045967551s 2.101006102s 2.142181428s 2.154250985s 2.163289514s 2.171599827s 2.180743751s 2.187225914s 2.203698047s 2.260756463s 2.307167105s 2.370958904s 2.43744706s 2.473635492s 2.558730048s 2.689650758s 2.855648972s 2.901732211s 2.915877768s 2.926645489s 2.934359942s 2.940217055s 2.942786711s 2.946351823s 2.950863967s 3.002537853s 3.017761395s 3.152401933s 3.166838997s 3.186896125s 3.195489395s 3.276884467s 3.289562252s 3.298343519s 3.481723744s 3.492192632s 3.55340862s 3.600909197s 3.682331656s 3.705248424s 3.790863674s 3.956620682s 4.035202097s 4.045085517s 4.046401435s 4.118131504s 4.152383946s 4.161717972s 4.169120608s 4.18399321s 4.192421704s 4.210864517s 4.24972716s 4.254904233s 4.274856768s 4.292977248s 4.346871636s 4.42500799s 4.453370877s 4.456009765s 4.466694916s 4.467578418s 4.480043816s 4.484043351s 4.50272499s 4.506235762s 4.512667019s 4.514706328s 4.547428517s 4.576718189s 4.596303767s 4.604983789s 4.60974338s 4.69878646s 4.735259975s 4.742933239s 4.841499377s 5.045568234s 5.218317296s 5.227041812s 5.257676603s 5.518441788s 5.629995732s 5.755858421s 5.768391437s 5.798800018s 6.029467291s 6.124862949s 6.209979263s 6.282404122s 6.360905384s 6.479424493s 6.618145738s 6.663480057s 6.71390281s 6.727226118s 6.753841544s 6.896753325s 7.028166744s 7.10836537s 7.162167865s 7.173984408s 7.179683695s 7.25590001s 7.25871132s 7.341216003s 7.462656499s 7.629956712s 8.006519519s 8.05270868s 8.140333994s 8.208555494s 8.485522099s 8.659510404s 8.693751448s] May 20 12:09:25.248: INFO: 50 %ile: 2.689650758s May 20 12:09:25.248: INFO: 90 %ile: 6.727226118s May 20 12:09:25.248: INFO: 99 %ile: 8.659510404s May 20 12:09:25.248: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:09:25.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-2cxws" for this suite. 
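[editor's note] The latency figures and percentiles above come from creating a Service and timing how long it takes for its Endpoints object to be populated. The real test uses the e2e framework's own machinery; the sketch below is only an illustration of that idea with client-go (the helper name, namespace, label selector, and the context-taking Create/Get signatures from recent client-go releases are assumptions, not the framework's code — the 1.13-era client used here did not take a context):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// measureEndpointLatency creates a Service selecting already-running pods and
// polls until the Endpoints object has at least one address, returning the delay.
func measureEndpointLatency(cs kubernetes.Interface, ns, name string) (time.Duration, error) {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "latency-backend"}, // assumed backend pod label
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
		},
	}
	start := time.Now()
	if _, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		return 0, err
	}
	err := wait.PollImmediate(50*time.Millisecond, 30*time.Second, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // Endpoints object not created yet; keep polling.
		}
		for _, subset := range ep.Subsets {
			if len(subset.Addresses) > 0 {
				return true, nil
			}
		}
		return false, nil
	})
	return time.Since(start), err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	d, err := measureEndpointLatency(cs, "default", "latency-svc-demo")
	fmt.Println(d, err)
}
```

Repeating this for 200 services and sorting the durations yields the 50/90/99 %ile summary reported above.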
May 20 12:09:53.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:09:53.317: INFO: namespace: e2e-tests-svc-latency-2cxws, resource: bindings, ignored listing per whitelist May 20 12:09:53.375: INFO: namespace e2e-tests-svc-latency-2cxws deletion completed in 28.084523167s • [SLOW TEST:78.595 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:09:53.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-22fq8 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet May 20 12:09:53.550: INFO: Found 0 stateful pods, waiting for 3 May 20 12:10:03.608: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 20 12:10:03.608: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 20 12:10:03.608: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 20 12:10:13.556: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 20 12:10:13.556: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 20 12:10:13.556: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 20 12:10:13.584: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 20 12:10:23.626: INFO: Updating stateful set ss2 May 20 12:10:23.638: INFO: Waiting for Pod e2e-tests-statefulset-22fq8/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 20 12:10:33.942: INFO: Found 2 stateful pods, waiting for 3 May 20 12:10:43.947: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 20 12:10:43.947: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true 
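[editor's note] The canary update above and the phased rolling update in the lines that follow are driven by spec.updateStrategy.rollingUpdate.partition: only pods with an ordinal greater than or equal to the partition are moved to the new revision, so a partition above the replica count applies nothing, partition 2 updates only the canary pod ss2-2, and lowering it later rolls the change across ss2-1 and ss2-0. A rough sketch of the relevant StatefulSet fields (the object name, replica count, headless service, and images mirror the log; everything else is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"app": "ss2"}
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    int32Ptr(3),
			ServiceName: "test", // headless service created by the test
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.15-alpine", // updated image
					}},
				},
			},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					// Partition 2 => only the canary pod ss2-2 is rolled to the
					// new revision; lowering the partition afterwards performs
					// the phased roll-out across ss2-1 and then ss2-0.
					Partition: int32Ptr(2),
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}
```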
May 20 12:10:43.947: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 20 12:10:43.969: INFO: Updating stateful set ss2 May 20 12:10:43.987: INFO: Waiting for Pod e2e-tests-statefulset-22fq8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 20 12:10:54.014: INFO: Updating stateful set ss2 May 20 12:10:54.112: INFO: Waiting for StatefulSet e2e-tests-statefulset-22fq8/ss2 to complete update May 20 12:10:54.112: INFO: Waiting for Pod e2e-tests-statefulset-22fq8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 20 12:11:04.119: INFO: Waiting for StatefulSet e2e-tests-statefulset-22fq8/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 20 12:11:14.120: INFO: Deleting all statefulset in ns e2e-tests-statefulset-22fq8 May 20 12:11:14.123: INFO: Scaling statefulset ss2 to 0 May 20 12:11:44.156: INFO: Waiting for statefulset status.replicas updated to 0 May 20 12:11:44.159: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:11:44.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-22fq8" for this suite. May 20 12:11:50.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:11:50.236: INFO: namespace: e2e-tests-statefulset-22fq8, resource: bindings, ignored listing per whitelist May 20 12:11:50.267: INFO: namespace e2e-tests-statefulset-22fq8 deletion completed in 6.085323877s • [SLOW TEST:116.892 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:11:50.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-wpxtb May 20 12:11:54.394: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-wpxtb STEP: checking the pod's current state and verifying that 
restartCount is present May 20 12:11:54.397: INFO: Initial restart count of pod liveness-http is 0 May 20 12:12:08.442: INFO: Restart count of pod e2e-tests-container-probe-wpxtb/liveness-http is now 1 (14.04492418s elapsed) May 20 12:12:28.496: INFO: Restart count of pod e2e-tests-container-probe-wpxtb/liveness-http is now 2 (34.098947161s elapsed) May 20 12:12:50.652: INFO: Restart count of pod e2e-tests-container-probe-wpxtb/liveness-http is now 3 (56.254616585s elapsed) May 20 12:13:08.688: INFO: Restart count of pod e2e-tests-container-probe-wpxtb/liveness-http is now 4 (1m14.291045481s elapsed) May 20 12:14:14.196: INFO: Restart count of pod e2e-tests-container-probe-wpxtb/liveness-http is now 5 (2m19.798980236s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:14:14.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-wpxtb" for this suite. May 20 12:14:28.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:14:28.480: INFO: namespace: e2e-tests-container-probe-wpxtb, resource: bindings, ignored listing per whitelist May 20 12:14:28.526: INFO: namespace e2e-tests-container-probe-wpxtb deletion completed in 12.558235078s • [SLOW TEST:158.258 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:14:28.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 20 12:14:41.576: INFO: Successfully updated pod "annotationupdate74fea9f2-9a93-11ea-b520-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:14:43.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-548zp" for this suite. 
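[editor's note] The "should update annotations on modification" test above mounts the pod's own annotations through a projected downwardAPI volume and verifies that the mounted file is refreshed after the annotations are patched. A minimal sketch of such a pod spec (the image, commands, and paths are illustrative assumptions, and field names follow recent k8s.io/api releases):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"builder": "initial-value"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

Once the pod is running, updating metadata.annotations causes the kubelet to rewrite /etc/podinfo/annotations, which is the behavior the "Successfully updated pod" line above confirms.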
May 20 12:15:05.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:15:05.704: INFO: namespace: e2e-tests-projected-548zp, resource: bindings, ignored listing per whitelist May 20 12:15:05.758: INFO: namespace e2e-tests-projected-548zp deletion completed in 22.145467846s • [SLOW TEST:37.232 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:15:05.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 20 12:15:05.857: INFO: Waiting up to 5m0s for pod "pod-8a3f9aab-9a93-11ea-b520-0242ac110018" in namespace "e2e-tests-emptydir-v2n67" to be "success or failure" May 20 12:15:05.918: INFO: Pod "pod-8a3f9aab-9a93-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 61.571617ms May 20 12:15:07.921: INFO: Pod "pod-8a3f9aab-9a93-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064364527s May 20 12:15:09.925: INFO: Pod "pod-8a3f9aab-9a93-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.067953206s May 20 12:15:11.929: INFO: Pod "pod-8a3f9aab-9a93-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072464749s STEP: Saw pod success May 20 12:15:11.929: INFO: Pod "pod-8a3f9aab-9a93-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:15:11.932: INFO: Trying to get logs from node hunter-worker2 pod pod-8a3f9aab-9a93-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 12:15:11.955: INFO: Waiting for pod pod-8a3f9aab-9a93-11ea-b520-0242ac110018 to disappear May 20 12:15:11.961: INFO: Pod pod-8a3f9aab-9a93-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:15:11.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-v2n67" for this suite. 
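[editor's note] The emptydir "(non-root,0666/0777,default)" variants above run a test container as a non-root UID, write a file with the given mode into an emptyDir on the default (node-disk-backed) medium, and read it back. The conformance test uses its own test image; the pod below is only a rough stand-in with an assumed UID, image, and shell commands:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1001), // non-root UID; the real test uses its own fixed UID
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"echo hello > /mnt/data/file && chmod 0777 /mnt/data/file && ls -l /mnt/data/file && cat /mnt/data/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/mnt/data"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "data",
				VolumeSource: corev1.VolumeSource{
					// Leaving Medium empty selects the default, disk-backed emptyDir.
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```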
May 20 12:15:17.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:15:18.037: INFO: namespace: e2e-tests-emptydir-v2n67, resource: bindings, ignored listing per whitelist May 20 12:15:18.065: INFO: namespace e2e-tests-emptydir-v2n67 deletion completed in 6.094404618s • [SLOW TEST:12.307 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:15:18.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-f4dfv May 20 12:15:22.226: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-f4dfv STEP: checking the pod's current state and verifying that restartCount is present May 20 12:15:22.228: INFO: Initial restart count of pod liveness-exec is 0 May 20 12:16:18.427: INFO: Restart count of pod e2e-tests-container-probe-f4dfv/liveness-exec is now 1 (56.198830313s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:16:18.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-f4dfv" for this suite. 
May 20 12:16:24.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:16:24.540: INFO: namespace: e2e-tests-container-probe-f4dfv, resource: bindings, ignored listing per whitelist May 20 12:16:24.569: INFO: namespace e2e-tests-container-probe-f4dfv deletion completed in 6.074792717s • [SLOW TEST:66.504 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:16:24.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 12:16:24.678: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b93b0655-9a93-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-4r4hk" to be "success or failure" May 20 12:16:24.683: INFO: Pod "downwardapi-volume-b93b0655-9a93-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.360649ms May 20 12:16:26.687: INFO: Pod "downwardapi-volume-b93b0655-9a93-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00861172s May 20 12:16:28.691: INFO: Pod "downwardapi-volume-b93b0655-9a93-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013079128s STEP: Saw pod success May 20 12:16:28.691: INFO: Pod "downwardapi-volume-b93b0655-9a93-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:16:28.696: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b93b0655-9a93-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 12:16:28.726: INFO: Waiting for pod downwardapi-volume-b93b0655-9a93-11ea-b520-0242ac110018 to disappear May 20 12:16:28.751: INFO: Pod downwardapi-volume-b93b0655-9a93-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:16:28.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4r4hk" for this suite. 
May 20 12:16:34.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:16:34.848: INFO: namespace: e2e-tests-projected-4r4hk, resource: bindings, ignored listing per whitelist May 20 12:16:34.895: INFO: namespace e2e-tests-projected-4r4hk deletion completed in 6.139906582s • [SLOW TEST:10.325 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:16:34.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-bf66a611-9a93-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 12:16:35.175: INFO: Waiting up to 5m0s for pod "pod-secrets-bf79fc0f-9a93-11ea-b520-0242ac110018" in namespace "e2e-tests-secrets-qg9jv" to be "success or failure" May 20 12:16:35.209: INFO: Pod "pod-secrets-bf79fc0f-9a93-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 33.724194ms May 20 12:16:37.278: INFO: Pod "pod-secrets-bf79fc0f-9a93-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102787471s May 20 12:16:39.282: INFO: Pod "pod-secrets-bf79fc0f-9a93-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106687221s STEP: Saw pod success May 20 12:16:39.282: INFO: Pod "pod-secrets-bf79fc0f-9a93-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:16:39.284: INFO: Trying to get logs from node hunter-worker pod pod-secrets-bf79fc0f-9a93-11ea-b520-0242ac110018 container secret-volume-test: STEP: delete the pod May 20 12:16:39.335: INFO: Waiting for pod pod-secrets-bf79fc0f-9a93-11ea-b520-0242ac110018 to disappear May 20 12:16:39.339: INFO: Pod pod-secrets-bf79fc0f-9a93-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:16:39.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-qg9jv" for this suite. 
May 20 12:16:45.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:16:45.396: INFO: namespace: e2e-tests-secrets-qg9jv, resource: bindings, ignored listing per whitelist May 20 12:16:45.444: INFO: namespace e2e-tests-secrets-qg9jv deletion completed in 6.101329325s STEP: Destroying namespace "e2e-tests-secret-namespace-s6vrl" for this suite. May 20 12:16:51.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:16:51.504: INFO: namespace: e2e-tests-secret-namespace-s6vrl, resource: bindings, ignored listing per whitelist May 20 12:16:51.512: INFO: namespace e2e-tests-secret-namespace-s6vrl deletion completed in 6.067713842s • [SLOW TEST:16.617 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:16:51.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 20 12:16:51.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-m8txv' May 20 12:16:54.287: INFO: stderr: "" May 20 12:16:54.288: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 20 12:16:55.293: INFO: Selector matched 1 pods for map[app:redis] May 20 12:16:55.293: INFO: Found 0 / 1 May 20 12:16:56.292: INFO: Selector matched 1 pods for map[app:redis] May 20 12:16:56.292: INFO: Found 0 / 1 May 20 12:16:57.291: INFO: Selector matched 1 pods for map[app:redis] May 20 12:16:57.291: INFO: Found 0 / 1 May 20 12:16:58.292: INFO: Selector matched 1 pods for map[app:redis] May 20 12:16:58.292: INFO: Found 1 / 1 May 20 12:16:58.292: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 20 12:16:58.295: INFO: Selector matched 1 pods for map[app:redis] May 20 12:16:58.295: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
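[editor's note] The "patching all pods" step above (the kubectl invocation appears on the next line) applies {"metadata":{"annotations":{"x":"y"}}} as a strategic merge patch. The same operation through client-go looks roughly like this; the pod name and namespace are placeholders, and the context-taking Patch signature is from recent client-go releases:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Equivalent of: kubectl patch pod <name> -p '{"metadata":{"annotations":{"x":"y"}}}'
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	pod, err := cs.CoreV1().Pods("default").Patch(
		context.TODO(), "redis-master-example", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("annotations after patch:", pod.Annotations)
}
```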
May 20 12:16:58.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-cl2zc --namespace=e2e-tests-kubectl-m8txv -p {"metadata":{"annotations":{"x":"y"}}}' May 20 12:16:58.402: INFO: stderr: "" May 20 12:16:58.402: INFO: stdout: "pod/redis-master-cl2zc patched\n" STEP: checking annotations May 20 12:16:58.422: INFO: Selector matched 1 pods for map[app:redis] May 20 12:16:58.422: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:16:58.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-m8txv" for this suite. May 20 12:17:22.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:17:22.513: INFO: namespace: e2e-tests-kubectl-m8txv, resource: bindings, ignored listing per whitelist May 20 12:17:22.574: INFO: namespace e2e-tests-kubectl-m8txv deletion completed in 24.148622489s • [SLOW TEST:31.062 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:17:22.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7wwkd STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 12:17:22.639: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 20 12:17:47.229: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.206:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-7wwkd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 12:17:47.230: INFO: >>> kubeConfig: /root/.kube/config I0520 12:17:47.258384 7 log.go:172] (0xc001ca8420) (0xc0012f3040) Create stream I0520 12:17:47.258420 7 log.go:172] (0xc001ca8420) (0xc0012f3040) Stream added, broadcasting: 1 I0520 12:17:47.260187 7 log.go:172] (0xc001ca8420) Reply frame received for 1 I0520 12:17:47.260223 7 log.go:172] (0xc001ca8420) (0xc001dcfae0) Create stream I0520 12:17:47.260233 7 log.go:172] (0xc001ca8420) (0xc001dcfae0) Stream added, broadcasting: 3 I0520 12:17:47.261009 7 log.go:172] (0xc001ca8420) Reply frame received for 
3 I0520 12:17:47.261047 7 log.go:172] (0xc001ca8420) (0xc001ceec80) Create stream I0520 12:17:47.261068 7 log.go:172] (0xc001ca8420) (0xc001ceec80) Stream added, broadcasting: 5 I0520 12:17:47.262038 7 log.go:172] (0xc001ca8420) Reply frame received for 5 I0520 12:17:47.382440 7 log.go:172] (0xc001ca8420) Data frame received for 3 I0520 12:17:47.382470 7 log.go:172] (0xc001dcfae0) (3) Data frame handling I0520 12:17:47.382489 7 log.go:172] (0xc001dcfae0) (3) Data frame sent I0520 12:17:47.382943 7 log.go:172] (0xc001ca8420) Data frame received for 5 I0520 12:17:47.382961 7 log.go:172] (0xc001ceec80) (5) Data frame handling I0520 12:17:47.382982 7 log.go:172] (0xc001ca8420) Data frame received for 3 I0520 12:17:47.382996 7 log.go:172] (0xc001dcfae0) (3) Data frame handling I0520 12:17:47.387429 7 log.go:172] (0xc001ca8420) Data frame received for 1 I0520 12:17:47.387465 7 log.go:172] (0xc0012f3040) (1) Data frame handling I0520 12:17:47.387487 7 log.go:172] (0xc0012f3040) (1) Data frame sent I0520 12:17:47.387509 7 log.go:172] (0xc001ca8420) (0xc0012f3040) Stream removed, broadcasting: 1 I0520 12:17:47.387612 7 log.go:172] (0xc001ca8420) (0xc0012f3040) Stream removed, broadcasting: 1 I0520 12:17:47.387635 7 log.go:172] (0xc001ca8420) (0xc001dcfae0) Stream removed, broadcasting: 3 I0520 12:17:47.387654 7 log.go:172] (0xc001ca8420) (0xc001ceec80) Stream removed, broadcasting: 5 May 20 12:17:47.387: INFO: Found all expected endpoints: [netserver-0] I0520 12:17:47.388030 7 log.go:172] (0xc001ca8420) Go away received May 20 12:17:47.395: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.171:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-7wwkd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 12:17:47.395: INFO: >>> kubeConfig: /root/.kube/config I0520 12:17:47.419887 7 log.go:172] (0xc001ca88f0) (0xc0012f35e0) Create stream I0520 12:17:47.419919 7 log.go:172] (0xc001ca88f0) (0xc0012f35e0) Stream added, broadcasting: 1 I0520 12:17:47.422017 7 log.go:172] (0xc001ca88f0) Reply frame received for 1 I0520 12:17:47.422061 7 log.go:172] (0xc001ca88f0) (0xc0012f3680) Create stream I0520 12:17:47.422077 7 log.go:172] (0xc001ca88f0) (0xc0012f3680) Stream added, broadcasting: 3 I0520 12:17:47.422794 7 log.go:172] (0xc001ca88f0) Reply frame received for 3 I0520 12:17:47.422828 7 log.go:172] (0xc001ca88f0) (0xc0021ed9a0) Create stream I0520 12:17:47.422847 7 log.go:172] (0xc001ca88f0) (0xc0021ed9a0) Stream added, broadcasting: 5 I0520 12:17:47.423491 7 log.go:172] (0xc001ca88f0) Reply frame received for 5 I0520 12:17:47.508366 7 log.go:172] (0xc001ca88f0) Data frame received for 5 I0520 12:17:47.508403 7 log.go:172] (0xc0021ed9a0) (5) Data frame handling I0520 12:17:47.508434 7 log.go:172] (0xc001ca88f0) Data frame received for 3 I0520 12:17:47.508454 7 log.go:172] (0xc0012f3680) (3) Data frame handling I0520 12:17:47.508471 7 log.go:172] (0xc0012f3680) (3) Data frame sent I0520 12:17:47.508481 7 log.go:172] (0xc001ca88f0) Data frame received for 3 I0520 12:17:47.508489 7 log.go:172] (0xc0012f3680) (3) Data frame handling I0520 12:17:47.510242 7 log.go:172] (0xc001ca88f0) Data frame received for 1 I0520 12:17:47.510326 7 log.go:172] (0xc0012f35e0) (1) Data frame handling I0520 12:17:47.510399 7 log.go:172] (0xc0012f35e0) (1) Data frame sent I0520 12:17:47.510438 7 log.go:172] (0xc001ca88f0) (0xc0012f35e0) Stream removed, broadcasting: 1 I0520 
12:17:47.510469 7 log.go:172] (0xc001ca88f0) Go away received I0520 12:17:47.510544 7 log.go:172] (0xc001ca88f0) (0xc0012f35e0) Stream removed, broadcasting: 1 I0520 12:17:47.510607 7 log.go:172] (0xc001ca88f0) (0xc0012f3680) Stream removed, broadcasting: 3 I0520 12:17:47.510627 7 log.go:172] (0xc001ca88f0) (0xc0021ed9a0) Stream removed, broadcasting: 5 May 20 12:17:47.510: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:17:47.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-7wwkd" for this suite. May 20 12:18:11.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:18:11.616: INFO: namespace: e2e-tests-pod-network-test-7wwkd, resource: bindings, ignored listing per whitelist May 20 12:18:11.675: INFO: namespace e2e-tests-pod-network-test-7wwkd deletion completed in 24.161074451s • [SLOW TEST:49.101 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:18:11.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-f9129edd-9a93-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 12:18:11.795: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f91308c6-9a93-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-wsvhq" to be "success or failure" May 20 12:18:11.816: INFO: Pod "pod-projected-configmaps-f91308c6-9a93-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.258583ms May 20 12:18:13.820: INFO: Pod "pod-projected-configmaps-f91308c6-9a93-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025332233s May 20 12:18:15.824: INFO: Pod "pod-projected-configmaps-f91308c6-9a93-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029036068s May 20 12:18:17.828: INFO: Pod "pod-projected-configmaps-f91308c6-9a93-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.032813564s STEP: Saw pod success May 20 12:18:17.828: INFO: Pod "pod-projected-configmaps-f91308c6-9a93-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:18:17.830: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-f91308c6-9a93-11ea-b520-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 20 12:18:17.904: INFO: Waiting for pod pod-projected-configmaps-f91308c6-9a93-11ea-b520-0242ac110018 to disappear May 20 12:18:17.911: INFO: Pod pod-projected-configmaps-f91308c6-9a93-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:18:17.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wsvhq" for this suite. May 20 12:18:23.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:18:24.042: INFO: namespace: e2e-tests-projected-wsvhq, resource: bindings, ignored listing per whitelist May 20 12:18:24.054: INFO: namespace e2e-tests-projected-wsvhq deletion completed in 6.139143621s • [SLOW TEST:12.378 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:18:24.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:18:28.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-fgjrl" for this suite. 
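[editor's note] The "EmptyDir wrapper volumes should not conflict" test above creates a secret and a configMap (the objects cleaned up in the STEP lines) and mounts both into a single pod, since both volume types are materialized by the kubelet as wrapped emptyDir directories and must coexist without clobbering each other. A rough sketch of such a pod, with placeholder object names and an assumed image and command:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volumes-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume"},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume"},
				},
			}},
			Volumes: []corev1.Volume{
				{
					Name: "secret-volume",
					VolumeSource: corev1.VolumeSource{
						Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-test-secret"},
					},
				},
				{
					Name: "configmap-volume",
					VolumeSource: corev1.VolumeSource{
						ConfigMap: &corev1.ConfigMapVolumeSource{
							LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-test-configmap"},
						},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```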
May 20 12:18:34.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:18:34.306: INFO: namespace: e2e-tests-emptydir-wrapper-fgjrl, resource: bindings, ignored listing per whitelist May 20 12:18:34.352: INFO: namespace e2e-tests-emptydir-wrapper-fgjrl deletion completed in 6.073120382s • [SLOW TEST:10.297 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:18:34.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-t4wf STEP: Creating a pod to test atomic-volume-subpath May 20 12:18:34.490: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-t4wf" in namespace "e2e-tests-subpath-7lrlc" to be "success or failure" May 20 12:18:34.516: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Pending", Reason="", readiness=false. Elapsed: 25.508944ms May 20 12:18:36.520: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030024225s May 20 12:18:38.523: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032803754s May 20 12:18:40.527: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036329811s May 20 12:18:42.592: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101496086s May 20 12:18:44.595: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Running", Reason="", readiness=false. Elapsed: 10.104254018s May 20 12:18:46.598: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Running", Reason="", readiness=false. Elapsed: 12.107774748s May 20 12:18:48.602: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Running", Reason="", readiness=false. Elapsed: 14.111423404s May 20 12:18:50.606: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Running", Reason="", readiness=false. Elapsed: 16.115594862s May 20 12:18:52.609: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Running", Reason="", readiness=false. Elapsed: 18.119184974s May 20 12:18:54.612: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Running", Reason="", readiness=false. Elapsed: 20.121714331s May 20 12:18:56.615: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.125165713s May 20 12:18:58.620: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Running", Reason="", readiness=false. Elapsed: 24.129319904s May 20 12:19:00.623: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Running", Reason="", readiness=false. Elapsed: 26.132330618s May 20 12:19:02.626: INFO: Pod "pod-subpath-test-configmap-t4wf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.135864583s STEP: Saw pod success May 20 12:19:02.626: INFO: Pod "pod-subpath-test-configmap-t4wf" satisfied condition "success or failure" May 20 12:19:02.629: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-t4wf container test-container-subpath-configmap-t4wf: STEP: delete the pod May 20 12:19:02.659: INFO: Waiting for pod pod-subpath-test-configmap-t4wf to disappear May 20 12:19:02.668: INFO: Pod pod-subpath-test-configmap-t4wf no longer exists STEP: Deleting pod pod-subpath-test-configmap-t4wf May 20 12:19:02.668: INFO: Deleting pod "pod-subpath-test-configmap-t4wf" in namespace "e2e-tests-subpath-7lrlc" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:19:02.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-7lrlc" for this suite. May 20 12:19:08.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:19:08.724: INFO: namespace: e2e-tests-subpath-7lrlc, resource: bindings, ignored listing per whitelist May 20 12:19:08.744: INFO: namespace e2e-tests-subpath-7lrlc deletion completed in 6.07131427s • [SLOW TEST:34.392 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:19:08.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 12:19:08.886: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b19e2ae-9a94-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-v968d" to be "success or failure" May 20 12:19:08.890: INFO: Pod "downwardapi-volume-1b19e2ae-9a94-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.73799ms May 20 12:19:10.893: INFO: Pod "downwardapi-volume-1b19e2ae-9a94-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007283892s May 20 12:19:12.897: INFO: Pod "downwardapi-volume-1b19e2ae-9a94-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.01099552s May 20 12:19:14.900: INFO: Pod "downwardapi-volume-1b19e2ae-9a94-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01388281s STEP: Saw pod success May 20 12:19:14.900: INFO: Pod "downwardapi-volume-1b19e2ae-9a94-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:19:14.902: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1b19e2ae-9a94-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 12:19:14.942: INFO: Waiting for pod downwardapi-volume-1b19e2ae-9a94-11ea-b520-0242ac110018 to disappear May 20 12:19:14.960: INFO: Pod downwardapi-volume-1b19e2ae-9a94-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:19:14.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-v968d" for this suite. May 20 12:19:20.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:19:21.032: INFO: namespace: e2e-tests-projected-v968d, resource: bindings, ignored listing per whitelist May 20 12:19:21.037: INFO: namespace e2e-tests-projected-v968d deletion completed in 6.074134368s • [SLOW TEST:12.292 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:19:21.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 12:19:21.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-226d2da4-9a94-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-whhrt" to be "success or failure" May 20 12:19:21.192: INFO: Pod "downwardapi-volume-226d2da4-9a94-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.368759ms May 20 12:19:23.227: INFO: Pod "downwardapi-volume-226d2da4-9a94-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. 
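This spec surfaces the container's own memory request as a file through a projected downward API volume using resourceFieldRef; the client-container prints the file and the test compares it with the declared request. A sketch under those assumptions (request size, divisor, names, and image are illustrative):

// Sketch only: memory request exposed via a projected downward API volume.
package main

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func memoryRequestDownwardPod(ns string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-memory-request", Namespace: ns},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "memory_request",
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "requests.memory",
                                        Divisor:       resource.MustParse("1Mi"),
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox", // illustrative
                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
}

The divisor sets the units written into the file, so a 32Mi request with a 1Mi divisor shows up as the string 32.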
Elapsed: 2.048682243s May 20 12:19:25.230: INFO: Pod "downwardapi-volume-226d2da4-9a94-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052112755s STEP: Saw pod success May 20 12:19:25.230: INFO: Pod "downwardapi-volume-226d2da4-9a94-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:19:25.232: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-226d2da4-9a94-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 12:19:25.408: INFO: Waiting for pod downwardapi-volume-226d2da4-9a94-11ea-b520-0242ac110018 to disappear May 20 12:19:25.478: INFO: Pod downwardapi-volume-226d2da4-9a94-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:19:25.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-whhrt" for this suite. May 20 12:19:31.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:19:31.553: INFO: namespace: e2e-tests-projected-whhrt, resource: bindings, ignored listing per whitelist May 20 12:19:31.582: INFO: namespace e2e-tests-projected-whhrt deletion completed in 6.101616641s • [SLOW TEST:10.545 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:19:31.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 12:19:31.726: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28b5ecc6-9a94-11ea-b520-0242ac110018" in namespace "e2e-tests-downward-api-4w69w" to be "success or failure" May 20 12:19:31.771: INFO: Pod "downwardapi-volume-28b5ecc6-9a94-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 44.466698ms May 20 12:19:33.775: INFO: Pod "downwardapi-volume-28b5ecc6-9a94-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048527019s May 20 12:19:35.778: INFO: Pod "downwardapi-volume-28b5ecc6-9a94-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051611856s May 20 12:19:37.781: INFO: Pod "downwardapi-volume-28b5ecc6-9a94-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
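The memory-limit variant being created here uses the plain downwardAPI volume source rather than a projected volume; only the volume definition really changes, roughly as below (limit resource, divisor, and names are illustrative).

// Sketch only: plain downward API volume exposing the container's memory limit.
package main

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

func memoryLimitVolume() corev1.Volume {
    return corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "memory_limit",
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "limits.memory",
                        Divisor:       resource.MustParse("1Mi"),
                    },
                }},
            },
        },
    }
}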
Elapsed: 6.05505327s STEP: Saw pod success May 20 12:19:37.781: INFO: Pod "downwardapi-volume-28b5ecc6-9a94-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:19:37.784: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-28b5ecc6-9a94-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 12:19:37.814: INFO: Waiting for pod downwardapi-volume-28b5ecc6-9a94-11ea-b520-0242ac110018 to disappear May 20 12:19:37.825: INFO: Pod downwardapi-volume-28b5ecc6-9a94-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:19:37.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4w69w" for this suite. May 20 12:19:43.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:19:43.882: INFO: namespace: e2e-tests-downward-api-4w69w, resource: bindings, ignored listing per whitelist May 20 12:19:43.904: INFO: namespace e2e-tests-downward-api-4w69w deletion completed in 6.076388614s • [SLOW TEST:12.322 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:19:43.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 12:19:44.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30119399-9a94-11ea-b520-0242ac110018" in namespace "e2e-tests-downward-api-qvhkg" to be "success or failure" May 20 12:19:44.068: INFO: Pod "downwardapi-volume-30119399-9a94-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.679679ms May 20 12:19:46.071: INFO: Pod "downwardapi-volume-30119399-9a94-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006557266s May 20 12:19:48.075: INFO: Pod "downwardapi-volume-30119399-9a94-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
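The "set mode on item file" case differs from the DefaultMode case only in that the permission bits are pinned on the individual downward API item. A one-item sketch; the 0400 value is an assumption:

// Sketch only: per-item Mode on a downward API volume file.
package main

import corev1 "k8s.io/api/core/v1"

func int32Ptr(i int32) *int32 { return &i }

func modedDownwardAPIItem() corev1.DownwardAPIVolumeFile {
    return corev1.DownwardAPIVolumeFile{
        Path: "podname",
        Mode: int32Ptr(0400), // overrides the volume's DefaultMode for this file only
        FieldRef: &corev1.ObjectFieldSelector{
            APIVersion: "v1",
            FieldPath:  "metadata.name",
        },
    }
}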
Elapsed: 4.010734982s STEP: Saw pod success May 20 12:19:48.075: INFO: Pod "downwardapi-volume-30119399-9a94-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:19:48.078: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-30119399-9a94-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 12:19:48.100: INFO: Waiting for pod downwardapi-volume-30119399-9a94-11ea-b520-0242ac110018 to disappear May 20 12:19:48.117: INFO: Pod downwardapi-volume-30119399-9a94-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:19:48.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qvhkg" for this suite. May 20 12:19:54.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:19:54.174: INFO: namespace: e2e-tests-downward-api-qvhkg, resource: bindings, ignored listing per whitelist May 20 12:19:54.192: INFO: namespace e2e-tests-downward-api-qvhkg deletion completed in 6.072656754s • [SLOW TEST:10.288 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:19:54.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-b2gsf/configmap-test-36342bc3-9a94-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 12:19:54.384: INFO: Waiting up to 5m0s for pod "pod-configmaps-3636f11e-9a94-11ea-b520-0242ac110018" in namespace "e2e-tests-configmap-b2gsf" to be "success or failure" May 20 12:19:54.386: INFO: Pod "pod-configmaps-3636f11e-9a94-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.753482ms May 20 12:19:56.401: INFO: Pod "pod-configmaps-3636f11e-9a94-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01759831s May 20 12:19:58.546: INFO: Pod "pod-configmaps-3636f11e-9a94-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
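The env-test container in this spec receives configMap data as environment variables and simply prints its environment for the test to check. Roughly, the wiring looks like the following; the configMap name, key, variable name, and image are illustrative.

// Sketch only: a configMap key consumed as an environment variable.
package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func configMapEnvPod(ns string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env", Namespace: ns},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "busybox", // illustrative
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{{
                    Name: "CONFIG_DATA_1",
                    ValueFrom: &corev1.EnvVarSource{
                        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
}

envFrom can also pull in a whole configMap at once; the sketch uses the single-key valueFrom form.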
Elapsed: 4.162324877s STEP: Saw pod success May 20 12:19:58.546: INFO: Pod "pod-configmaps-3636f11e-9a94-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:19:58.550: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-3636f11e-9a94-11ea-b520-0242ac110018 container env-test: STEP: delete the pod May 20 12:19:58.598: INFO: Waiting for pod pod-configmaps-3636f11e-9a94-11ea-b520-0242ac110018 to disappear May 20 12:19:58.604: INFO: Pod pod-configmaps-3636f11e-9a94-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:19:58.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-b2gsf" for this suite. May 20 12:20:04.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:20:04.681: INFO: namespace: e2e-tests-configmap-b2gsf, resource: bindings, ignored listing per whitelist May 20 12:20:04.715: INFO: namespace e2e-tests-configmap-b2gsf deletion completed in 6.107484036s • [SLOW TEST:10.522 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:20:04.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 20 12:20:04.821: INFO: Waiting up to 5m0s for pod "pod-3c71d473-9a94-11ea-b520-0242ac110018" in namespace "e2e-tests-emptydir-dxn8h" to be "success or failure" May 20 12:20:04.860: INFO: Pod "pod-3c71d473-9a94-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 38.920123ms May 20 12:20:06.864: INFO: Pod "pod-3c71d473-9a94-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042868492s May 20 12:20:08.868: INFO: Pod "pod-3c71d473-9a94-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.046715391s May 20 12:20:10.872: INFO: Pod "pod-3c71d473-9a94-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
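The pod under test here asks for a memory-backed emptyDir, runs as a non-root UID, and creates a 0777 file on the volume for verification. A compact sketch of that spec, with UID, paths, command, and image as assumptions:

// Sketch only: tmpfs-backed emptyDir exercised by a non-root container.
package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func emptyDirTmpfsPod(ns string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs", Namespace: ns},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}, // tmpfs
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox", // illustrative
                Command: []string{"sh", "-c",
                    "touch /test-volume/file && chmod 0777 /test-volume/file && stat -c '%a' /test-volume/file"},
                SecurityContext: &corev1.SecurityContext{RunAsUser: int64Ptr(1001)}, // non-root UID, assumed value
                VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
}

Leaving Medium empty gives the node's default disk-backed medium, which is the "default medium" variant exercised elsewhere in this run.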
Elapsed: 6.051108886s STEP: Saw pod success May 20 12:20:10.872: INFO: Pod "pod-3c71d473-9a94-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:20:10.874: INFO: Trying to get logs from node hunter-worker pod pod-3c71d473-9a94-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 12:20:10.914: INFO: Waiting for pod pod-3c71d473-9a94-11ea-b520-0242ac110018 to disappear May 20 12:20:10.927: INFO: Pod pod-3c71d473-9a94-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:20:10.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dxn8h" for this suite. May 20 12:20:16.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:20:17.100: INFO: namespace: e2e-tests-emptydir-dxn8h, resource: bindings, ignored listing per whitelist May 20 12:20:17.103: INFO: namespace e2e-tests-emptydir-dxn8h deletion completed in 6.17219271s • [SLOW TEST:12.388 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:20:17.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-pdjn2 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-pdjn2 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-pdjn2 May 20 12:20:17.321: INFO: Found 0 stateful pods, waiting for 1 May 20 12:20:27.326: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 20 12:20:27.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 20 12:20:27.580: INFO: stderr: "I0520 12:20:27.442900 2746 log.go:172] (0xc0005ac210) (0xc0002d5400) Create stream\nI0520 12:20:27.442946 2746 log.go:172] 
(0xc0005ac210) (0xc0002d5400) Stream added, broadcasting: 1\nI0520 12:20:27.444985 2746 log.go:172] (0xc0005ac210) Reply frame received for 1\nI0520 12:20:27.445017 2746 log.go:172] (0xc0005ac210) (0xc000844000) Create stream\nI0520 12:20:27.445024 2746 log.go:172] (0xc0005ac210) (0xc000844000) Stream added, broadcasting: 3\nI0520 12:20:27.446153 2746 log.go:172] (0xc0005ac210) Reply frame received for 3\nI0520 12:20:27.446179 2746 log.go:172] (0xc0005ac210) (0xc0002d54a0) Create stream\nI0520 12:20:27.446191 2746 log.go:172] (0xc0005ac210) (0xc0002d54a0) Stream added, broadcasting: 5\nI0520 12:20:27.447033 2746 log.go:172] (0xc0005ac210) Reply frame received for 5\nI0520 12:20:27.576014 2746 log.go:172] (0xc0005ac210) Data frame received for 5\nI0520 12:20:27.576042 2746 log.go:172] (0xc0002d54a0) (5) Data frame handling\nI0520 12:20:27.576060 2746 log.go:172] (0xc0005ac210) Data frame received for 3\nI0520 12:20:27.576068 2746 log.go:172] (0xc000844000) (3) Data frame handling\nI0520 12:20:27.576076 2746 log.go:172] (0xc000844000) (3) Data frame sent\nI0520 12:20:27.576341 2746 log.go:172] (0xc0005ac210) Data frame received for 3\nI0520 12:20:27.576357 2746 log.go:172] (0xc000844000) (3) Data frame handling\nI0520 12:20:27.578058 2746 log.go:172] (0xc0005ac210) Data frame received for 1\nI0520 12:20:27.578079 2746 log.go:172] (0xc0002d5400) (1) Data frame handling\nI0520 12:20:27.578103 2746 log.go:172] (0xc0002d5400) (1) Data frame sent\nI0520 12:20:27.578120 2746 log.go:172] (0xc0005ac210) (0xc0002d5400) Stream removed, broadcasting: 1\nI0520 12:20:27.578163 2746 log.go:172] (0xc0005ac210) Go away received\nI0520 12:20:27.578291 2746 log.go:172] (0xc0005ac210) (0xc0002d5400) Stream removed, broadcasting: 1\nI0520 12:20:27.578310 2746 log.go:172] (0xc0005ac210) (0xc000844000) Stream removed, broadcasting: 3\nI0520 12:20:27.578323 2746 log.go:172] (0xc0005ac210) (0xc0002d54a0) Stream removed, broadcasting: 5\n" May 20 12:20:27.580: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 20 12:20:27.580: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 20 12:20:27.583: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 20 12:20:37.587: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 20 12:20:37.587: INFO: Waiting for statefulset status.replicas updated to 0 May 20 12:20:37.605: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999647s May 20 12:20:38.620: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990491257s May 20 12:20:39.625: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.974455003s May 20 12:20:40.629: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.969736558s May 20 12:20:41.634: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.965730849s May 20 12:20:42.638: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.961209562s May 20 12:20:43.642: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.957585667s May 20 12:20:44.646: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.952805063s May 20 12:20:45.650: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.948721819s May 20 12:20:46.655: INFO: Verifying statefulset ss doesn't scale past 1 for another 944.716051ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of 
them will be running in namespace e2e-tests-statefulset-pdjn2 May 20 12:20:47.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:20:47.826: INFO: stderr: "I0520 12:20:47.773305 2767 log.go:172] (0xc0001309a0) (0xc00002b680) Create stream\nI0520 12:20:47.773346 2767 log.go:172] (0xc0001309a0) (0xc00002b680) Stream added, broadcasting: 1\nI0520 12:20:47.774621 2767 log.go:172] (0xc0001309a0) Reply frame received for 1\nI0520 12:20:47.774668 2767 log.go:172] (0xc0001309a0) (0xc000882000) Create stream\nI0520 12:20:47.774681 2767 log.go:172] (0xc0001309a0) (0xc000882000) Stream added, broadcasting: 3\nI0520 12:20:47.775343 2767 log.go:172] (0xc0001309a0) Reply frame received for 3\nI0520 12:20:47.775374 2767 log.go:172] (0xc0001309a0) (0xc00002b720) Create stream\nI0520 12:20:47.775397 2767 log.go:172] (0xc0001309a0) (0xc00002b720) Stream added, broadcasting: 5\nI0520 12:20:47.776008 2767 log.go:172] (0xc0001309a0) Reply frame received for 5\nI0520 12:20:47.820861 2767 log.go:172] (0xc0001309a0) Data frame received for 5\nI0520 12:20:47.820924 2767 log.go:172] (0xc00002b720) (5) Data frame handling\nI0520 12:20:47.820958 2767 log.go:172] (0xc0001309a0) Data frame received for 3\nI0520 12:20:47.820972 2767 log.go:172] (0xc000882000) (3) Data frame handling\nI0520 12:20:47.820985 2767 log.go:172] (0xc000882000) (3) Data frame sent\nI0520 12:20:47.820996 2767 log.go:172] (0xc0001309a0) Data frame received for 3\nI0520 12:20:47.821011 2767 log.go:172] (0xc000882000) (3) Data frame handling\nI0520 12:20:47.821728 2767 log.go:172] (0xc0001309a0) Data frame received for 1\nI0520 12:20:47.821740 2767 log.go:172] (0xc00002b680) (1) Data frame handling\nI0520 12:20:47.821746 2767 log.go:172] (0xc00002b680) (1) Data frame sent\nI0520 12:20:47.821752 2767 log.go:172] (0xc0001309a0) (0xc00002b680) Stream removed, broadcasting: 1\nI0520 12:20:47.821886 2767 log.go:172] (0xc0001309a0) (0xc00002b680) Stream removed, broadcasting: 1\nI0520 12:20:47.821898 2767 log.go:172] (0xc0001309a0) (0xc000882000) Stream removed, broadcasting: 3\nI0520 12:20:47.821987 2767 log.go:172] (0xc0001309a0) Go away received\nI0520 12:20:47.822028 2767 log.go:172] (0xc0001309a0) (0xc00002b720) Stream removed, broadcasting: 5\n" May 20 12:20:47.826: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 20 12:20:47.826: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 20 12:20:47.829: INFO: Found 1 stateful pods, waiting for 3 May 20 12:20:57.834: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 20 12:20:57.834: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 20 12:20:57.834: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 20 12:20:57.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 20 12:20:58.052: INFO: stderr: "I0520 12:20:57.977526 2789 log.go:172] (0xc000154790) (0xc000685540) Create stream\nI0520 12:20:57.977603 2789 log.go:172] (0xc000154790) (0xc000685540) Stream added, 
broadcasting: 1\nI0520 12:20:57.979800 2789 log.go:172] (0xc000154790) Reply frame received for 1\nI0520 12:20:57.979847 2789 log.go:172] (0xc000154790) (0xc0006855e0) Create stream\nI0520 12:20:57.979861 2789 log.go:172] (0xc000154790) (0xc0006855e0) Stream added, broadcasting: 3\nI0520 12:20:57.980680 2789 log.go:172] (0xc000154790) Reply frame received for 3\nI0520 12:20:57.980718 2789 log.go:172] (0xc000154790) (0xc0003b0000) Create stream\nI0520 12:20:57.980739 2789 log.go:172] (0xc000154790) (0xc0003b0000) Stream added, broadcasting: 5\nI0520 12:20:57.981699 2789 log.go:172] (0xc000154790) Reply frame received for 5\nI0520 12:20:58.041921 2789 log.go:172] (0xc000154790) Data frame received for 5\nI0520 12:20:58.041972 2789 log.go:172] (0xc000154790) Data frame received for 3\nI0520 12:20:58.041995 2789 log.go:172] (0xc0006855e0) (3) Data frame handling\nI0520 12:20:58.042006 2789 log.go:172] (0xc0006855e0) (3) Data frame sent\nI0520 12:20:58.042014 2789 log.go:172] (0xc000154790) Data frame received for 3\nI0520 12:20:58.042030 2789 log.go:172] (0xc0006855e0) (3) Data frame handling\nI0520 12:20:58.042074 2789 log.go:172] (0xc0003b0000) (5) Data frame handling\nI0520 12:20:58.043789 2789 log.go:172] (0xc000154790) Data frame received for 1\nI0520 12:20:58.043851 2789 log.go:172] (0xc000685540) (1) Data frame handling\nI0520 12:20:58.043891 2789 log.go:172] (0xc000685540) (1) Data frame sent\nI0520 12:20:58.043934 2789 log.go:172] (0xc000154790) (0xc000685540) Stream removed, broadcasting: 1\nI0520 12:20:58.044035 2789 log.go:172] (0xc000154790) Go away received\nI0520 12:20:58.044148 2789 log.go:172] (0xc000154790) (0xc000685540) Stream removed, broadcasting: 1\nI0520 12:20:58.044163 2789 log.go:172] (0xc000154790) (0xc0006855e0) Stream removed, broadcasting: 3\nI0520 12:20:58.044172 2789 log.go:172] (0xc000154790) (0xc0003b0000) Stream removed, broadcasting: 5\n" May 20 12:20:58.052: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 20 12:20:58.052: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 20 12:20:58.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 20 12:20:58.353: INFO: stderr: "I0520 12:20:58.227184 2811 log.go:172] (0xc00014e580) (0xc0007225a0) Create stream\nI0520 12:20:58.227243 2811 log.go:172] (0xc00014e580) (0xc0007225a0) Stream added, broadcasting: 1\nI0520 12:20:58.229023 2811 log.go:172] (0xc00014e580) Reply frame received for 1\nI0520 12:20:58.229061 2811 log.go:172] (0xc00014e580) (0xc0007c6c80) Create stream\nI0520 12:20:58.229071 2811 log.go:172] (0xc00014e580) (0xc0007c6c80) Stream added, broadcasting: 3\nI0520 12:20:58.229920 2811 log.go:172] (0xc00014e580) Reply frame received for 3\nI0520 12:20:58.229958 2811 log.go:172] (0xc00014e580) (0xc00035e000) Create stream\nI0520 12:20:58.229970 2811 log.go:172] (0xc00014e580) (0xc00035e000) Stream added, broadcasting: 5\nI0520 12:20:58.230789 2811 log.go:172] (0xc00014e580) Reply frame received for 5\nI0520 12:20:58.347077 2811 log.go:172] (0xc00014e580) Data frame received for 5\nI0520 12:20:58.347109 2811 log.go:172] (0xc00035e000) (5) Data frame handling\nI0520 12:20:58.347138 2811 log.go:172] (0xc00014e580) Data frame received for 3\nI0520 12:20:58.347174 2811 log.go:172] (0xc0007c6c80) (3) Data frame handling\nI0520 12:20:58.347217 2811 log.go:172] 
(0xc0007c6c80) (3) Data frame sent\nI0520 12:20:58.347234 2811 log.go:172] (0xc00014e580) Data frame received for 3\nI0520 12:20:58.347257 2811 log.go:172] (0xc0007c6c80) (3) Data frame handling\nI0520 12:20:58.348655 2811 log.go:172] (0xc00014e580) Data frame received for 1\nI0520 12:20:58.348694 2811 log.go:172] (0xc0007225a0) (1) Data frame handling\nI0520 12:20:58.348717 2811 log.go:172] (0xc0007225a0) (1) Data frame sent\nI0520 12:20:58.348738 2811 log.go:172] (0xc00014e580) (0xc0007225a0) Stream removed, broadcasting: 1\nI0520 12:20:58.348755 2811 log.go:172] (0xc00014e580) Go away received\nI0520 12:20:58.348953 2811 log.go:172] (0xc00014e580) (0xc0007225a0) Stream removed, broadcasting: 1\nI0520 12:20:58.348964 2811 log.go:172] (0xc00014e580) (0xc0007c6c80) Stream removed, broadcasting: 3\nI0520 12:20:58.348988 2811 log.go:172] (0xc00014e580) (0xc00035e000) Stream removed, broadcasting: 5\n" May 20 12:20:58.353: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 20 12:20:58.353: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 20 12:20:58.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 20 12:20:58.556: INFO: stderr: "I0520 12:20:58.471362 2833 log.go:172] (0xc000138790) (0xc0006f4640) Create stream\nI0520 12:20:58.471438 2833 log.go:172] (0xc000138790) (0xc0006f4640) Stream added, broadcasting: 1\nI0520 12:20:58.474093 2833 log.go:172] (0xc000138790) Reply frame received for 1\nI0520 12:20:58.474123 2833 log.go:172] (0xc000138790) (0xc0007eedc0) Create stream\nI0520 12:20:58.474134 2833 log.go:172] (0xc000138790) (0xc0007eedc0) Stream added, broadcasting: 3\nI0520 12:20:58.475039 2833 log.go:172] (0xc000138790) Reply frame received for 3\nI0520 12:20:58.475087 2833 log.go:172] (0xc000138790) (0xc00036a000) Create stream\nI0520 12:20:58.475110 2833 log.go:172] (0xc000138790) (0xc00036a000) Stream added, broadcasting: 5\nI0520 12:20:58.475890 2833 log.go:172] (0xc000138790) Reply frame received for 5\nI0520 12:20:58.551596 2833 log.go:172] (0xc000138790) Data frame received for 5\nI0520 12:20:58.551646 2833 log.go:172] (0xc00036a000) (5) Data frame handling\nI0520 12:20:58.551677 2833 log.go:172] (0xc000138790) Data frame received for 3\nI0520 12:20:58.551700 2833 log.go:172] (0xc0007eedc0) (3) Data frame handling\nI0520 12:20:58.551726 2833 log.go:172] (0xc0007eedc0) (3) Data frame sent\nI0520 12:20:58.551950 2833 log.go:172] (0xc000138790) Data frame received for 3\nI0520 12:20:58.551963 2833 log.go:172] (0xc0007eedc0) (3) Data frame handling\nI0520 12:20:58.553347 2833 log.go:172] (0xc000138790) Data frame received for 1\nI0520 12:20:58.553357 2833 log.go:172] (0xc0006f4640) (1) Data frame handling\nI0520 12:20:58.553364 2833 log.go:172] (0xc0006f4640) (1) Data frame sent\nI0520 12:20:58.553371 2833 log.go:172] (0xc000138790) (0xc0006f4640) Stream removed, broadcasting: 1\nI0520 12:20:58.553482 2833 log.go:172] (0xc000138790) (0xc0006f4640) Stream removed, broadcasting: 1\nI0520 12:20:58.553501 2833 log.go:172] (0xc000138790) (0xc0007eedc0) Stream removed, broadcasting: 3\nI0520 12:20:58.553510 2833 log.go:172] (0xc000138790) (0xc00036a000) Stream removed, broadcasting: 5\n" May 20 12:20:58.556: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 20 12:20:58.556: INFO: stdout of mv -v 
/usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 20 12:20:58.556: INFO: Waiting for statefulset status.replicas updated to 0 May 20 12:20:58.559: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 20 12:21:08.568: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 20 12:21:08.568: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 20 12:21:08.568: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 20 12:21:08.598: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999944s May 20 12:21:09.602: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.977874844s May 20 12:21:10.608: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972914779s May 20 12:21:11.612: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.967728715s May 20 12:21:12.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.96383485s May 20 12:21:13.621: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.959702285s May 20 12:21:14.624: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.954547595s May 20 12:21:15.629: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.950775183s May 20 12:21:16.635: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.946115232s May 20 12:21:17.640: INFO: Verifying statefulset ss doesn't scale past 3 for another 940.63293ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-pdjn2 May 20 12:21:18.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:21:18.901: INFO: stderr: "I0520 12:21:18.840170 2856 log.go:172] (0xc00015c630) (0xc0005e3400) Create stream\nI0520 12:21:18.840228 2856 log.go:172] (0xc00015c630) (0xc0005e3400) Stream added, broadcasting: 1\nI0520 12:21:18.842420 2856 log.go:172] (0xc00015c630) Reply frame received for 1\nI0520 12:21:18.842451 2856 log.go:172] (0xc00015c630) (0xc0005e34a0) Create stream\nI0520 12:21:18.842459 2856 log.go:172] (0xc00015c630) (0xc0005e34a0) Stream added, broadcasting: 3\nI0520 12:21:18.843357 2856 log.go:172] (0xc00015c630) Reply frame received for 3\nI0520 12:21:18.843447 2856 log.go:172] (0xc00015c630) (0xc00013a000) Create stream\nI0520 12:21:18.843483 2856 log.go:172] (0xc00015c630) (0xc00013a000) Stream added, broadcasting: 5\nI0520 12:21:18.844431 2856 log.go:172] (0xc00015c630) Reply frame received for 5\nI0520 12:21:18.894371 2856 log.go:172] (0xc00015c630) Data frame received for 5\nI0520 12:21:18.894425 2856 log.go:172] (0xc00013a000) (5) Data frame handling\nI0520 12:21:18.894454 2856 log.go:172] (0xc00015c630) Data frame received for 3\nI0520 12:21:18.894480 2856 log.go:172] (0xc0005e34a0) (3) Data frame handling\nI0520 12:21:18.894494 2856 log.go:172] (0xc0005e34a0) (3) Data frame sent\nI0520 12:21:18.894505 2856 log.go:172] (0xc00015c630) Data frame received for 3\nI0520 12:21:18.894512 2856 log.go:172] (0xc0005e34a0) (3) Data frame handling\nI0520 12:21:18.896046 2856 log.go:172] (0xc00015c630) Data frame received for 1\nI0520 12:21:18.896071 2856 log.go:172] (0xc0005e3400) (1) Data frame handling\nI0520 12:21:18.896107 2856 log.go:172] (0xc0005e3400) (1) Data 
frame sent\nI0520 12:21:18.896125 2856 log.go:172] (0xc00015c630) (0xc0005e3400) Stream removed, broadcasting: 1\nI0520 12:21:18.896356 2856 log.go:172] (0xc00015c630) Go away received\nI0520 12:21:18.896409 2856 log.go:172] (0xc00015c630) (0xc0005e3400) Stream removed, broadcasting: 1\nI0520 12:21:18.896433 2856 log.go:172] (0xc00015c630) (0xc0005e34a0) Stream removed, broadcasting: 3\nI0520 12:21:18.896444 2856 log.go:172] (0xc00015c630) (0xc00013a000) Stream removed, broadcasting: 5\n" May 20 12:21:18.901: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 20 12:21:18.901: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 20 12:21:18.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:21:19.117: INFO: stderr: "I0520 12:21:19.064652 2879 log.go:172] (0xc000138840) (0xc000675360) Create stream\nI0520 12:21:19.064720 2879 log.go:172] (0xc000138840) (0xc000675360) Stream added, broadcasting: 1\nI0520 12:21:19.067634 2879 log.go:172] (0xc000138840) Reply frame received for 1\nI0520 12:21:19.067697 2879 log.go:172] (0xc000138840) (0xc000675400) Create stream\nI0520 12:21:19.067720 2879 log.go:172] (0xc000138840) (0xc000675400) Stream added, broadcasting: 3\nI0520 12:21:19.068617 2879 log.go:172] (0xc000138840) Reply frame received for 3\nI0520 12:21:19.068647 2879 log.go:172] (0xc000138840) (0xc000672000) Create stream\nI0520 12:21:19.068658 2879 log.go:172] (0xc000138840) (0xc000672000) Stream added, broadcasting: 5\nI0520 12:21:19.070003 2879 log.go:172] (0xc000138840) Reply frame received for 5\nI0520 12:21:19.110601 2879 log.go:172] (0xc000138840) Data frame received for 3\nI0520 12:21:19.110639 2879 log.go:172] (0xc000675400) (3) Data frame handling\nI0520 12:21:19.110650 2879 log.go:172] (0xc000675400) (3) Data frame sent\nI0520 12:21:19.110658 2879 log.go:172] (0xc000138840) Data frame received for 3\nI0520 12:21:19.110670 2879 log.go:172] (0xc000138840) Data frame received for 5\nI0520 12:21:19.110681 2879 log.go:172] (0xc000672000) (5) Data frame handling\nI0520 12:21:19.110700 2879 log.go:172] (0xc000675400) (3) Data frame handling\nI0520 12:21:19.112298 2879 log.go:172] (0xc000138840) Data frame received for 1\nI0520 12:21:19.112320 2879 log.go:172] (0xc000675360) (1) Data frame handling\nI0520 12:21:19.112331 2879 log.go:172] (0xc000675360) (1) Data frame sent\nI0520 12:21:19.112343 2879 log.go:172] (0xc000138840) (0xc000675360) Stream removed, broadcasting: 1\nI0520 12:21:19.112363 2879 log.go:172] (0xc000138840) Go away received\nI0520 12:21:19.112641 2879 log.go:172] (0xc000138840) (0xc000675360) Stream removed, broadcasting: 1\nI0520 12:21:19.112677 2879 log.go:172] (0xc000138840) (0xc000675400) Stream removed, broadcasting: 3\nI0520 12:21:19.112703 2879 log.go:172] (0xc000138840) (0xc000672000) Stream removed, broadcasting: 5\n" May 20 12:21:19.117: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 20 12:21:19.117: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 20 12:21:19.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:21:20.146: INFO: rc: 1 May 
20 12:21:20.146: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] I0520 12:21:19.455034 2902 log.go:172] (0xc000138840) (0xc000784640) Create stream I0520 12:21:19.455146 2902 log.go:172] (0xc000138840) (0xc000784640) Stream added, broadcasting: 1 I0520 12:21:19.457952 2902 log.go:172] (0xc000138840) Reply frame received for 1 I0520 12:21:19.457988 2902 log.go:172] (0xc000138840) (0xc0005e8c80) Create stream I0520 12:21:19.457995 2902 log.go:172] (0xc000138840) (0xc0005e8c80) Stream added, broadcasting: 3 I0520 12:21:19.458635 2902 log.go:172] (0xc000138840) Reply frame received for 3 I0520 12:21:19.458655 2902 log.go:172] (0xc000138840) (0xc0005e8dc0) Create stream I0520 12:21:19.458663 2902 log.go:172] (0xc000138840) (0xc0005e8dc0) Stream added, broadcasting: 5 I0520 12:21:19.459156 2902 log.go:172] (0xc000138840) Reply frame received for 5 I0520 12:21:20.141864 2902 log.go:172] (0xc000138840) Data frame received for 1 I0520 12:21:20.141930 2902 log.go:172] (0xc000138840) (0xc0005e8c80) Stream removed, broadcasting: 3 I0520 12:21:20.141977 2902 log.go:172] (0xc000784640) (1) Data frame handling I0520 12:21:20.142003 2902 log.go:172] (0xc000138840) (0xc0005e8dc0) Stream removed, broadcasting: 5 I0520 12:21:20.142066 2902 log.go:172] (0xc000784640) (1) Data frame sent I0520 12:21:20.142099 2902 log.go:172] (0xc000138840) (0xc000784640) Stream removed, broadcasting: 1 I0520 12:21:20.142121 2902 log.go:172] (0xc000138840) Go away received I0520 12:21:20.142450 2902 log.go:172] (0xc000138840) (0xc000784640) Stream removed, broadcasting: 1 I0520 12:21:20.142476 2902 log.go:172] (0xc000138840) (0xc0005e8c80) Stream removed, broadcasting: 3 I0520 12:21:20.142489 2902 log.go:172] (0xc000138840) (0xc0005e8dc0) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "71ed18c72a221b39d8eeb34ac3a22344a56da684d882b06ead9e594cf16289b5": task 7a54e7df69585f76e8f74ad7ad9aeab53480b22b134cd77ce2177cb1179b1363 not found: not found [] 0xc0008a1170 exit status 1 true [0xc0006c7790 0xc0006c77d8 0xc0006c77f8] [0xc0006c7790 0xc0006c77d8 0xc0006c77f8] [0xc0006c77c0 0xc0006c77e8] [0x935700 0x935700] 0xc002169500 }: Command stdout: stderr: I0520 12:21:19.455034 2902 log.go:172] (0xc000138840) (0xc000784640) Create stream I0520 12:21:19.455146 2902 log.go:172] (0xc000138840) (0xc000784640) Stream added, broadcasting: 1 I0520 12:21:19.457952 2902 log.go:172] (0xc000138840) Reply frame received for 1 I0520 12:21:19.457988 2902 log.go:172] (0xc000138840) (0xc0005e8c80) Create stream I0520 12:21:19.457995 2902 log.go:172] (0xc000138840) (0xc0005e8c80) Stream added, broadcasting: 3 I0520 12:21:19.458635 2902 log.go:172] (0xc000138840) Reply frame received for 3 I0520 12:21:19.458655 2902 log.go:172] (0xc000138840) (0xc0005e8dc0) Create stream I0520 12:21:19.458663 2902 log.go:172] (0xc000138840) (0xc0005e8dc0) Stream added, broadcasting: 5 I0520 12:21:19.459156 2902 log.go:172] (0xc000138840) Reply frame received for 5 I0520 12:21:20.141864 2902 log.go:172] (0xc000138840) Data frame received for 1 I0520 12:21:20.141930 2902 log.go:172] (0xc000138840) (0xc0005e8c80) Stream removed, broadcasting: 3 I0520 12:21:20.141977 2902 log.go:172] (0xc000784640) (1) Data frame handling I0520 12:21:20.142003 2902 log.go:172] (0xc000138840) 
(0xc0005e8dc0) Stream removed, broadcasting: 5 I0520 12:21:20.142066 2902 log.go:172] (0xc000784640) (1) Data frame sent I0520 12:21:20.142099 2902 log.go:172] (0xc000138840) (0xc000784640) Stream removed, broadcasting: 1 I0520 12:21:20.142121 2902 log.go:172] (0xc000138840) Go away received I0520 12:21:20.142450 2902 log.go:172] (0xc000138840) (0xc000784640) Stream removed, broadcasting: 1 I0520 12:21:20.142476 2902 log.go:172] (0xc000138840) (0xc0005e8c80) Stream removed, broadcasting: 3 I0520 12:21:20.142489 2902 log.go:172] (0xc000138840) (0xc0005e8dc0) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "71ed18c72a221b39d8eeb34ac3a22344a56da684d882b06ead9e594cf16289b5": task 7a54e7df69585f76e8f74ad7ad9aeab53480b22b134cd77ce2177cb1179b1363 not found: not found error: exit status 1 May 20 12:21:30.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:21:30.223: INFO: rc: 1 May 20 12:21:30.223: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017b7e00 exit status 1 true [0xc001c74a38 0xc001c74a50 0xc001c74a68] [0xc001c74a38 0xc001c74a50 0xc001c74a68] [0xc001c74a48 0xc001c74a60] [0x935700 0x935700] 0xc002a9c7e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:21:40.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:21:40.317: INFO: rc: 1 May 20 12:21:40.317: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ea0120 exit status 1 true [0xc0004d4150 0xc0004d43f8 0xc0004d4690] [0xc0004d4150 0xc0004d43f8 0xc0004d4690] [0xc0004d41f0 0xc0004d45d8] [0x935700 0x935700] 0xc000c562a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:21:50.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:21:50.414: INFO: rc: 1 May 20 12:21:50.414: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00256c120 exit status 1 true [0xc00000e1a8 0xc0006c6170 0xc0006c61d0] [0xc00000e1a8 0xc0006c6170 0xc0006c61d0] [0xc00016e1b8 0xc0006c61c8] [0x935700 0x935700] 0xc0024561e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:22:00.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 
ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:22:00.490: INFO: rc: 1 May 20 12:22:00.490: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025b6120 exit status 1 true [0xc00041c030 0xc00041c0d0 0xc00041c1c0] [0xc00041c030 0xc00041c0d0 0xc00041c1c0] [0xc00041c0a8 0xc00041c1b0] [0x935700 0x935700] 0xc002acf020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:22:10.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:22:10.576: INFO: rc: 1 May 20 12:22:10.576: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ea0270 exit status 1 true [0xc0004d46f8 0xc0004d4930 0xc0004d4c10] [0xc0004d46f8 0xc0004d4930 0xc0004d4c10] [0xc0004d4860 0xc0004d4bd8] [0x935700 0x935700] 0xc000c565a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:22:20.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:22:20.666: INFO: rc: 1 May 20 12:22:20.666: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00256c270 exit status 1 true [0xc0006c61f8 0xc0006c6318 0xc0006c6460] [0xc0006c61f8 0xc0006c6318 0xc0006c6460] [0xc0006c6278 0xc0006c6418] [0x935700 0x935700] 0xc002456480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:22:30.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:22:30.792: INFO: rc: 1 May 20 12:22:30.792: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00256c3f0 exit status 1 true [0xc0006c6498 0xc0006c6528 0xc0006c6570] [0xc0006c6498 0xc0006c6528 0xc0006c6570] [0xc0006c6518 0xc0006c6558] [0x935700 0x935700] 0xc002456720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:22:40.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:22:40.875: INFO: rc: 1 May 20 12:22:40.875: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025b62a0 exit status 1 true [0xc00041c1d8 0xc00041c240 0xc00041c348] [0xc00041c1d8 0xc00041c240 0xc00041c348] [0xc00041c208 0xc00041c2b0] [0x935700 0x935700] 0xc002acfb60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:22:50.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:22:50.971: INFO: rc: 1 May 20 12:22:50.971: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025b63c0 exit status 1 true [0xc00041c350 0xc00041c388 0xc00041c460] [0xc00041c350 0xc00041c388 0xc00041c460] [0xc00041c380 0xc00041c410] [0x935700 0x935700] 0xc002acfe00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:23:00.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:23:01.059: INFO: rc: 1 May 20 12:23:01.060: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00256c510 exit status 1 true [0xc0006c6580 0xc0006c65a0 0xc0006c65d0] [0xc0006c6580 0xc0006c65a0 0xc0006c65d0] [0xc0006c6598 0xc0006c65b8] [0x935700 0x935700] 0xc0024569c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:23:11.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:23:11.147: INFO: rc: 1 May 20 12:23:11.147: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ea03f0 exit status 1 true [0xc0004d4c80 0xc0004d4d40 0xc0004d4f08] [0xc0004d4c80 0xc0004d4d40 0xc0004d4f08] [0xc0004d4d18 0xc0004d4e18] [0x935700 0x935700] 0xc000c56ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:23:21.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:23:21.238: INFO: rc: 1 May 20 12:23:21.238: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error 
from server (NotFound): pods "ss-2" not found [] 0xc00256c690 exit status 1 true [0xc0006c65f0 0xc0006c6628 0xc0006c6670] [0xc0006c65f0 0xc0006c6628 0xc0006c6670] [0xc0006c6618 0xc0006c6658] [0x935700 0x935700] 0xc002456c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:23:31.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:23:31.326: INFO: rc: 1 May 20 12:23:31.326: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025b6570 exit status 1 true [0xc00041c468 0xc00041c4c8 0xc00041c538] [0xc00041c468 0xc00041c4c8 0xc00041c538] [0xc00041c4a8 0xc00041c4f8] [0x935700 0x935700] 0xc002324120 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:23:41.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:23:41.415: INFO: rc: 1 May 20 12:23:41.415: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025b6150 exit status 1 true [0xc00016e1b8 0xc00041c030 0xc00041c0d0] [0xc00016e1b8 0xc00041c030 0xc00041c0d0] [0xc00041c010 0xc00041c0a8] [0x935700 0x935700] 0xc002acef60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:23:51.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:23:51.505: INFO: rc: 1 May 20 12:23:51.505: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00256c150 exit status 1 true [0xc0006c6170 0xc0006c61d0 0xc0006c6278] [0xc0006c6170 0xc0006c61d0 0xc0006c6278] [0xc0006c61c8 0xc0006c6210] [0x935700 0x935700] 0xc0023242a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:24:01.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:24:01.595: INFO: rc: 1 May 20 12:24:01.595: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025b6270 exit status 1 true [0xc00041c140 0xc00041c1d8 0xc00041c240] [0xc00041c140 0xc00041c1d8 0xc00041c240] [0xc00041c1c0 0xc00041c208] 
[0x935700 0x935700] 0xc002acfaa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:24:11.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:24:11.687: INFO: rc: 1 May 20 12:24:11.687: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025b63f0 exit status 1 true [0xc00041c260 0xc00041c350 0xc00041c388] [0xc00041c260 0xc00041c350 0xc00041c388] [0xc00041c348 0xc00041c380] [0x935700 0x935700] 0xc002acfd40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:24:21.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:24:21.793: INFO: rc: 1 May 20 12:24:21.793: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001cea150 exit status 1 true [0xc0004d4150 0xc0004d43f8 0xc0004d4690] [0xc0004d4150 0xc0004d43f8 0xc0004d4690] [0xc0004d41f0 0xc0004d45d8] [0x935700 0x935700] 0xc0024561e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:24:31.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:24:31.878: INFO: rc: 1 May 20 12:24:31.878: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025b65a0 exit status 1 true [0xc00041c3f8 0xc00041c468 0xc00041c4c8] [0xc00041c3f8 0xc00041c468 0xc00041c4c8] [0xc00041c460 0xc00041c4a8] [0x935700 0x935700] 0xc000c56120 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:24:41.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:24:41.961: INFO: rc: 1 May 20 12:24:41.961: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ea0150 exit status 1 true [0xc001aa4010 0xc001aa4028 0xc001aa4070] [0xc001aa4010 0xc001aa4028 0xc001aa4070] [0xc001aa4020 0xc001aa4068] [0x935700 0x935700] 0xc002604300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:24:51.961: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:24:52.051: INFO: rc: 1 May 20 12:24:52.051: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025b71d0 exit status 1 true [0xc00041c4f0 0xc00041c540 0xc00041c578] [0xc00041c4f0 0xc00041c540 0xc00041c578] [0xc00041c538 0xc00041c558] [0x935700 0x935700] 0xc000c56420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:25:02.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:25:02.132: INFO: rc: 1 May 20 12:25:02.132: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ea02d0 exit status 1 true [0xc001aa4090 0xc001aa4100 0xc001aa4140] [0xc001aa4090 0xc001aa4100 0xc001aa4140] [0xc001aa40d8 0xc001aa4118] [0x935700 0x935700] 0xc0026045a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:25:12.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:25:12.243: INFO: rc: 1 May 20 12:25:12.243: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001cea2a0 exit status 1 true [0xc0004d46f8 0xc0004d4930 0xc0004d4c10] [0xc0004d46f8 0xc0004d4930 0xc0004d4c10] [0xc0004d4860 0xc0004d4bd8] [0x935700 0x935700] 0xc002456480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:25:22.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:25:22.336: INFO: rc: 1 May 20 12:25:22.336: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001cea3f0 exit status 1 true [0xc0004d4c80 0xc0004d4d40 0xc0004d4f08] [0xc0004d4c80 0xc0004d4d40 0xc0004d4f08] [0xc0004d4d18 0xc0004d4e18] [0x935700 0x935700] 0xc002456720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:25:32.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:25:32.438: INFO: rc: 1 May 20 
12:25:32.438: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ea0420 exit status 1 true [0xc001aa4158 0xc001aa4170 0xc001aa41c8] [0xc001aa4158 0xc001aa4170 0xc001aa41c8] [0xc001aa4168 0xc001aa41a8] [0x935700 0x935700] 0xc002604840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:25:42.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:25:42.529: INFO: rc: 1 May 20 12:25:42.529: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025b6120 exit status 1 true [0xc00016e000 0xc00041c030 0xc00041c0d0] [0xc00016e000 0xc00041c030 0xc00041c0d0] [0xc00041c010 0xc00041c0a8] [0x935700 0x935700] 0xc002acf020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:25:52.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:25:52.618: INFO: rc: 1 May 20 12:25:52.618: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025b62a0 exit status 1 true [0xc00041c140 0xc00041c1d8 0xc00041c240] [0xc00041c140 0xc00041c1d8 0xc00041c240] [0xc00041c1c0 0xc00041c208] [0x935700 0x935700] 0xc002acfb60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:26:02.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:26:02.707: INFO: rc: 1 May 20 12:26:02.707: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ea0120 exit status 1 true [0xc0004d4150 0xc0004d43f8 0xc0004d4690] [0xc0004d4150 0xc0004d43f8 0xc0004d4690] [0xc0004d41f0 0xc0004d45d8] [0x935700 0x935700] 0xc000c56180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:26:12.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:26:12.801: INFO: rc: 1 May 20 12:26:12.801: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- 
/bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ea0270 exit status 1 true [0xc0004d46f8 0xc0004d4930 0xc0004d4c10] [0xc0004d46f8 0xc0004d4930 0xc0004d4c10] [0xc0004d4860 0xc0004d4bd8] [0x935700 0x935700] 0xc000c56480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 20 12:26:22.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pdjn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 20 12:26:22.902: INFO: rc: 1 May 20 12:26:22.902: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: May 20 12:26:22.902: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 20 12:26:22.911: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pdjn2 May 20 12:26:22.913: INFO: Scaling statefulset ss to 0 May 20 12:26:22.920: INFO: Waiting for statefulset status.replicas updated to 0 May 20 12:26:22.922: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:26:22.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-pdjn2" for this suite. May 20 12:26:29.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:26:29.052: INFO: namespace: e2e-tests-statefulset-pdjn2, resource: bindings, ignored listing per whitelist May 20 12:26:29.106: INFO: namespace e2e-tests-statefulset-pdjn2 deletion completed in 6.140985208s • [SLOW TEST:372.003 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:26:29.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-218f84eb-9a95-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 12:26:29.229: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2191a50f-9a95-11ea-b520-0242ac110018" in 
namespace "e2e-tests-projected-9lw6t" to be "success or failure" May 20 12:26:29.242: INFO: Pod "pod-projected-configmaps-2191a50f-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.823638ms May 20 12:26:31.246: INFO: Pod "pod-projected-configmaps-2191a50f-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016837798s May 20 12:26:33.274: INFO: Pod "pod-projected-configmaps-2191a50f-9a95-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.044499524s May 20 12:26:35.278: INFO: Pod "pod-projected-configmaps-2191a50f-9a95-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049006566s STEP: Saw pod success May 20 12:26:35.278: INFO: Pod "pod-projected-configmaps-2191a50f-9a95-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:26:35.281: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-2191a50f-9a95-11ea-b520-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 20 12:26:35.300: INFO: Waiting for pod pod-projected-configmaps-2191a50f-9a95-11ea-b520-0242ac110018 to disappear May 20 12:26:35.305: INFO: Pod pod-projected-configmaps-2191a50f-9a95-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:26:35.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9lw6t" for this suite. May 20 12:26:41.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:26:41.371: INFO: namespace: e2e-tests-projected-9lw6t, resource: bindings, ignored listing per whitelist May 20 12:26:41.410: INFO: namespace e2e-tests-projected-9lw6t deletion completed in 6.102609614s • [SLOW TEST:12.304 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:26:41.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 20 12:26:41.549: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28e873f6-9a95-11ea-b520-0242ac110018" in namespace "e2e-tests-downward-api-h9w7h" to be "success or failure" May 20 12:26:41.575: INFO: Pod 
"downwardapi-volume-28e873f6-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 25.891053ms May 20 12:26:43.665: INFO: Pod "downwardapi-volume-28e873f6-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116247919s May 20 12:26:45.670: INFO: Pod "downwardapi-volume-28e873f6-9a95-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.120682808s May 20 12:26:47.674: INFO: Pod "downwardapi-volume-28e873f6-9a95-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.124697446s STEP: Saw pod success May 20 12:26:47.674: INFO: Pod "downwardapi-volume-28e873f6-9a95-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:26:47.676: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-28e873f6-9a95-11ea-b520-0242ac110018 container client-container: STEP: delete the pod May 20 12:26:47.702: INFO: Waiting for pod downwardapi-volume-28e873f6-9a95-11ea-b520-0242ac110018 to disappear May 20 12:26:47.707: INFO: Pod downwardapi-volume-28e873f6-9a95-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:26:47.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-h9w7h" for this suite. May 20 12:26:53.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:26:53.765: INFO: namespace: e2e-tests-downward-api-h9w7h, resource: bindings, ignored listing per whitelist May 20 12:26:53.823: INFO: namespace e2e-tests-downward-api-h9w7h deletion completed in 6.112680208s • [SLOW TEST:12.412 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:26:53.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-gtcp STEP: Creating a pod to test atomic-volume-subpath May 20 12:26:53.942: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gtcp" in namespace "e2e-tests-subpath-v4sr7" to be "success or failure" May 20 12:26:53.946: INFO: Pod "pod-subpath-test-configmap-gtcp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436928ms May 20 12:26:55.971: INFO: Pod "pod-subpath-test-configmap-gtcp": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.029116629s May 20 12:26:57.975: INFO: Pod "pod-subpath-test-configmap-gtcp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033654101s May 20 12:26:59.980: INFO: Pod "pod-subpath-test-configmap-gtcp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038678831s May 20 12:27:01.983: INFO: Pod "pod-subpath-test-configmap-gtcp": Phase="Running", Reason="", readiness=false. Elapsed: 8.041805291s May 20 12:27:03.988: INFO: Pod "pod-subpath-test-configmap-gtcp": Phase="Running", Reason="", readiness=false. Elapsed: 10.046208092s May 20 12:27:05.993: INFO: Pod "pod-subpath-test-configmap-gtcp": Phase="Running", Reason="", readiness=false. Elapsed: 12.051584548s May 20 12:27:07.998: INFO: Pod "pod-subpath-test-configmap-gtcp": Phase="Running", Reason="", readiness=false. Elapsed: 14.055877403s May 20 12:27:10.002: INFO: Pod "pod-subpath-test-configmap-gtcp": Phase="Running", Reason="", readiness=false. Elapsed: 16.06070529s May 20 12:27:12.007: INFO: Pod "pod-subpath-test-configmap-gtcp": Phase="Running", Reason="", readiness=false. Elapsed: 18.065252277s May 20 12:27:14.011: INFO: Pod "pod-subpath-test-configmap-gtcp": Phase="Running", Reason="", readiness=false. Elapsed: 20.069285709s May 20 12:27:16.015: INFO: Pod "pod-subpath-test-configmap-gtcp": Phase="Running", Reason="", readiness=false. Elapsed: 22.073397241s May 20 12:27:18.019: INFO: Pod "pod-subpath-test-configmap-gtcp": Phase="Running", Reason="", readiness=false. Elapsed: 24.077809291s May 20 12:27:20.024: INFO: Pod "pod-subpath-test-configmap-gtcp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.081901388s STEP: Saw pod success May 20 12:27:20.024: INFO: Pod "pod-subpath-test-configmap-gtcp" satisfied condition "success or failure" May 20 12:27:20.026: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-gtcp container test-container-subpath-configmap-gtcp: STEP: delete the pod May 20 12:27:20.076: INFO: Waiting for pod pod-subpath-test-configmap-gtcp to disappear May 20 12:27:20.211: INFO: Pod pod-subpath-test-configmap-gtcp no longer exists STEP: Deleting pod pod-subpath-test-configmap-gtcp May 20 12:27:20.211: INFO: Deleting pod "pod-subpath-test-configmap-gtcp" in namespace "e2e-tests-subpath-v4sr7" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:27:20.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-v4sr7" for this suite. 
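The subpath case above reduces to mounting a single ConfigMap key at a file path inside the container instead of mounting the whole volume directory. Below is a minimal sketch of such a pod, using hypothetical names (subpath-configmap-demo, my-config, index.html) rather than the generated ones from this run; it prints JSON that can be piped to 'kubectl create -f -', the same way the framework submits its own manifests.

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-configmap-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox",
				Command: []string{"cat", "/etc/demo/index.html"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/demo/index.html",
					// SubPath mounts only this key from the volume at the target path,
					// leaving the rest of /etc/demo untouched.
					SubPath: "index.html",
				}},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(&pod); err != nil {
		panic(err)
	}
}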
May 20 12:27:26.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:27:26.266: INFO: namespace: e2e-tests-subpath-v4sr7, resource: bindings, ignored listing per whitelist May 20 12:27:26.312: INFO: namespace e2e-tests-subpath-v4sr7 deletion completed in 6.094454742s • [SLOW TEST:32.489 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:27:26.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-9sf6x.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-9sf6x.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-9sf6x.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-9sf6x.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-9sf6x.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-9sf6x.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 20 12:27:32.534: INFO: DNS probes using e2e-tests-dns-9sf6x/dns-test-43a438ed-9a95-11ea-b520-0242ac110018 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:27:32.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-9sf6x" for this suite. 
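All of the wheezy and jessie probe commands above boil down to one check: cluster DNS must answer A queries for kubernetes.default, kubernetes.default.svc and kubernetes.default.svc.cluster.local (plus the pod's own A record). Here is a minimal sketch of that check using Go's resolver instead of dig; it assumes the default cluster.local domain and has to run inside a pod so the usual resolv.conf search path applies.

package main

import (
	"fmt"
	"net"
)

func main() {
	// The short forms rely on the pod's resolv.conf search list; the last name is fully qualified.
	names := []string{
		"kubernetes.default",
		"kubernetes.default.svc",
		"kubernetes.default.svc.cluster.local",
	}
	for _, name := range names {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("FAIL %s: %v\n", name, err)
			continue
		}
		fmt.Printf("OK   %s -> %v\n", name, addrs)
	}
}

Each name should resolve to the kubernetes service ClusterIP; the probe pods in this test write an OK marker file per record instead of printing, and the framework then reads those result files back.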
May 20 12:27:40.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:27:40.713: INFO: namespace: e2e-tests-dns-9sf6x, resource: bindings, ignored listing per whitelist May 20 12:27:40.735: INFO: namespace e2e-tests-dns-9sf6x deletion completed in 8.128527645s • [SLOW TEST:14.423 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:27:40.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-4c3ebbb0-9a95-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 12:27:40.837: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4c40135a-9a95-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-qgbt2" to be "success or failure" May 20 12:27:40.841: INFO: Pod "pod-projected-secrets-4c40135a-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.606637ms May 20 12:27:42.990: INFO: Pod "pod-projected-secrets-4c40135a-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152169002s May 20 12:27:45.019: INFO: Pod "pod-projected-secrets-4c40135a-9a95-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.181580398s STEP: Saw pod success May 20 12:27:45.019: INFO: Pod "pod-projected-secrets-4c40135a-9a95-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:27:45.045: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-4c40135a-9a95-11ea-b520-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 20 12:27:45.064: INFO: Waiting for pod pod-projected-secrets-4c40135a-9a95-11ea-b520-0242ac110018 to disappear May 20 12:27:45.087: INFO: Pod pod-projected-secrets-4c40135a-9a95-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:27:45.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qgbt2" for this suite. 
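The "volume with mappings" case above exercises a projected secret volume whose items list remaps a secret key to a different file name inside the mount. A rough client-go sketch of the same shape follows; the names (my-secret, data-1, new-path-data-1, the default namespace) are placeholders, and note that recent client-go takes a context and CreateOptions while releases contemporary with this v1.13 run used Create(pod) alone.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "creds",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
								// Remap the key so it appears under a different file name in the mount.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/creds/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "creds", MountPath: "/etc/creds"}},
			}},
		},
	}

	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}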
May 20 12:27:51.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:27:51.409: INFO: namespace: e2e-tests-projected-qgbt2, resource: bindings, ignored listing per whitelist May 20 12:27:51.429: INFO: namespace e2e-tests-projected-qgbt2 deletion completed in 6.339188329s • [SLOW TEST:10.693 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:27:51.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 20 12:27:51.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 20 12:27:54.238: INFO: stderr: "" May 20 12:27:54.238: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:27:54.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8dbkf" for this suite. 
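The cluster-info check above simply runs 'kubectl cluster-info' and asserts that the control-plane entry shows up in the (ANSI-coloured) output captured in the stdout line above. A small sketch of the same validation through os/exec; the KUBECONFIG handling is an assumption, and newer kubectl releases print "Kubernetes control plane" where this one prints "Kubernetes master".

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	args := []string{"cluster-info"}
	if kc := os.Getenv("KUBECONFIG"); kc != "" {
		args = append([]string{"--kubeconfig=" + kc}, args...)
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "cluster-info failed: %v\n%s", err, out)
		os.Exit(1)
	}
	text := string(out)
	if !strings.Contains(text, "Kubernetes master") && !strings.Contains(text, "Kubernetes control plane") {
		fmt.Fprintln(os.Stderr, "control-plane entry not found in cluster-info output")
		os.Exit(1)
	}
	fmt.Print(text)
}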
May 20 12:28:00.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:28:00.291: INFO: namespace: e2e-tests-kubectl-8dbkf, resource: bindings, ignored listing per whitelist May 20 12:28:00.326: INFO: namespace e2e-tests-kubectl-8dbkf deletion completed in 6.085194149s • [SLOW TEST:8.897 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:28:00.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-l7g5 STEP: Creating a pod to test atomic-volume-subpath May 20 12:28:00.447: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-l7g5" in namespace "e2e-tests-subpath-7dmw6" to be "success or failure" May 20 12:28:00.451: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163069ms May 20 12:28:02.703: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256287504s May 20 12:28:04.707: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259682126s May 20 12:28:06.727: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.280022097s May 20 12:28:08.780: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.332630918s May 20 12:28:10.784: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Running", Reason="", readiness=false. Elapsed: 10.336600702s May 20 12:28:12.788: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Running", Reason="", readiness=false. Elapsed: 12.341136825s May 20 12:28:14.792: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Running", Reason="", readiness=false. Elapsed: 14.34469467s May 20 12:28:16.795: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Running", Reason="", readiness=false. Elapsed: 16.348022967s May 20 12:28:18.798: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Running", Reason="", readiness=false. Elapsed: 18.351014093s May 20 12:28:20.801: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.354358063s May 20 12:28:22.805: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Running", Reason="", readiness=false. Elapsed: 22.358227986s May 20 12:28:24.810: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Running", Reason="", readiness=false. Elapsed: 24.362790897s May 20 12:28:26.814: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Running", Reason="", readiness=false. Elapsed: 26.367031325s May 20 12:28:28.817: INFO: Pod "pod-subpath-test-downwardapi-l7g5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.369808904s STEP: Saw pod success May 20 12:28:28.817: INFO: Pod "pod-subpath-test-downwardapi-l7g5" satisfied condition "success or failure" May 20 12:28:28.819: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-l7g5 container test-container-subpath-downwardapi-l7g5: STEP: delete the pod May 20 12:28:28.854: INFO: Waiting for pod pod-subpath-test-downwardapi-l7g5 to disappear May 20 12:28:28.864: INFO: Pod pod-subpath-test-downwardapi-l7g5 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-l7g5 May 20 12:28:28.864: INFO: Deleting pod "pod-subpath-test-downwardapi-l7g5" in namespace "e2e-tests-subpath-7dmw6" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:28:28.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-7dmw6" for this suite. May 20 12:28:34.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:28:34.911: INFO: namespace: e2e-tests-subpath-7dmw6, resource: bindings, ignored listing per whitelist May 20 12:28:34.943: INFO: namespace e2e-tests-subpath-7dmw6 deletion completed in 6.072884235s • [SLOW TEST:34.616 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:28:34.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-6c8d60fa-9a95-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 12:28:35.045: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6c8edbc9-9a95-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-njl95" to be "success or failure" May 20 12:28:35.060: INFO: Pod 
"pod-projected-secrets-6c8edbc9-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.199307ms May 20 12:28:37.063: INFO: Pod "pod-projected-secrets-6c8edbc9-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018526489s May 20 12:28:39.068: INFO: Pod "pod-projected-secrets-6c8edbc9-9a95-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023635327s STEP: Saw pod success May 20 12:28:39.069: INFO: Pod "pod-projected-secrets-6c8edbc9-9a95-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:28:39.071: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-6c8edbc9-9a95-11ea-b520-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 20 12:28:39.099: INFO: Waiting for pod pod-projected-secrets-6c8edbc9-9a95-11ea-b520-0242ac110018 to disappear May 20 12:28:39.104: INFO: Pod pod-projected-secrets-6c8edbc9-9a95-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:28:39.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-njl95" for this suite. May 20 12:28:47.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:28:47.234: INFO: namespace: e2e-tests-projected-njl95, resource: bindings, ignored listing per whitelist May 20 12:28:47.417: INFO: namespace e2e-tests-projected-njl95 deletion completed in 8.309707927s • [SLOW TEST:12.474 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:28:47.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-73fbca6b-9a95-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 12:28:47.503: INFO: Waiting up to 5m0s for pod "pod-secrets-73fc83d2-9a95-11ea-b520-0242ac110018" in namespace "e2e-tests-secrets-hbbgq" to be "success or failure" May 20 12:28:47.507: INFO: Pod "pod-secrets-73fc83d2-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060076ms May 20 12:28:49.637: INFO: Pod "pod-secrets-73fc83d2-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.134159497s May 20 12:28:51.642: INFO: Pod "pod-secrets-73fc83d2-9a95-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.138975973s May 20 12:28:53.647: INFO: Pod "pod-secrets-73fc83d2-9a95-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.143941792s STEP: Saw pod success May 20 12:28:53.647: INFO: Pod "pod-secrets-73fc83d2-9a95-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:28:53.650: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-73fc83d2-9a95-11ea-b520-0242ac110018 container secret-volume-test: STEP: delete the pod May 20 12:28:53.670: INFO: Waiting for pod pod-secrets-73fc83d2-9a95-11ea-b520-0242ac110018 to disappear May 20 12:28:53.675: INFO: Pod pod-secrets-73fc83d2-9a95-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:28:53.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-hbbgq" for this suite. May 20 12:28:59.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:28:59.754: INFO: namespace: e2e-tests-secrets-hbbgq, resource: bindings, ignored listing per whitelist May 20 12:28:59.780: INFO: namespace e2e-tests-secrets-hbbgq deletion completed in 6.101834582s • [SLOW TEST:12.363 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:28:59.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 20 12:28:59.895: INFO: Waiting up to 5m0s for pod "pod-7b5e2cfb-9a95-11ea-b520-0242ac110018" in namespace "e2e-tests-emptydir-cstdw" to be "success or failure" May 20 12:28:59.907: INFO: Pod "pod-7b5e2cfb-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.05015ms May 20 12:29:01.912: INFO: Pod "pod-7b5e2cfb-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016489102s May 20 12:29:03.916: INFO: Pod "pod-7b5e2cfb-9a95-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020571583s STEP: Saw pod success May 20 12:29:03.916: INFO: Pod "pod-7b5e2cfb-9a95-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:29:03.918: INFO: Trying to get logs from node hunter-worker pod pod-7b5e2cfb-9a95-11ea-b520-0242ac110018 container test-container: STEP: delete the pod May 20 12:29:03.991: INFO: Waiting for pod pod-7b5e2cfb-9a95-11ea-b520-0242ac110018 to disappear May 20 12:29:03.995: INFO: Pod pod-7b5e2cfb-9a95-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:29:03.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-cstdw" for this suite. May 20 12:29:10.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:29:10.027: INFO: namespace: e2e-tests-emptydir-cstdw, resource: bindings, ignored listing per whitelist May 20 12:29:10.095: INFO: namespace e2e-tests-emptydir-cstdw deletion completed in 6.096585301s • [SLOW TEST:10.315 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:29:10.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-ln822 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-ln822 STEP: Deleting pre-stop pod May 20 12:29:23.259: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:29:23.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-ln822" for this suite. 
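The PreStop test above deletes the tester pod and then asserts that the server recorded a "prestop" hit, i.e. that the kubelet ran the container's preStop hook before killing it. Below is a minimal sketch of a pod wired the same way, submitted through 'kubectl create -f -' just as the framework does elsewhere in this log; the prestop-server service name and the wget notification are hypothetical stand-ins for the test's nettest server.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// A pod whose preStop hook notifies a peer service before the container stops.
const manifest = `
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  restartPolicy: Never
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "wget -q -O- http://prestop-server:8080/prestop || true"]
`

func main() {
	cmd := exec.Command("kubectl", "create", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Deleting the pod afterwards ('kubectl delete pod prestop-demo') gives the hook up to the termination grace period to finish before the container is killed.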
May 20 12:30:03.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:30:03.319: INFO: namespace: e2e-tests-prestop-ln822, resource: bindings, ignored listing per whitelist May 20 12:30:03.367: INFO: namespace e2e-tests-prestop-ln822 deletion completed in 40.091314958s • [SLOW TEST:53.271 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:30:03.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-a145dea6-9a95-11ea-b520-0242ac110018 STEP: Creating a pod to test consume configMaps May 20 12:30:03.493: INFO: Waiting up to 5m0s for pod "pod-configmaps-a147c407-9a95-11ea-b520-0242ac110018" in namespace "e2e-tests-configmap-gmsz8" to be "success or failure" May 20 12:30:03.509: INFO: Pod "pod-configmaps-a147c407-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.218993ms May 20 12:30:05.572: INFO: Pod "pod-configmaps-a147c407-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078862562s May 20 12:30:07.576: INFO: Pod "pod-configmaps-a147c407-9a95-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.082398253s May 20 12:30:09.580: INFO: Pod "pod-configmaps-a147c407-9a95-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.086789731s STEP: Saw pod success May 20 12:30:09.580: INFO: Pod "pod-configmaps-a147c407-9a95-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:30:09.584: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-a147c407-9a95-11ea-b520-0242ac110018 container configmap-volume-test: STEP: delete the pod May 20 12:30:09.646: INFO: Waiting for pod pod-configmaps-a147c407-9a95-11ea-b520-0242ac110018 to disappear May 20 12:30:09.653: INFO: Pod pod-configmaps-a147c407-9a95-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:30:09.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gmsz8" for this suite. 
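The non-root ConfigMap case above comes down to a pod-level securityContext (runAsUser plus runAsNonRoot) combined with an ordinary configMap volume, the point being that the unprivileged container can still read the projected files. A compact sketch with hypothetical names (configmap-nonroot-demo, my-config, UID 1000), again emitted as JSON for 'kubectl create -f -'.

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)
	nonRoot := true
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Run every container in the pod as an unprivileged user.
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:    &uid,
				RunAsNonRoot: &nonRoot,
			},
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "id && cat /etc/config/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "config", MountPath: "/etc/config"}},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(&pod); err != nil {
		panic(err)
	}
}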
May 20 12:30:15.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:30:15.729: INFO: namespace: e2e-tests-configmap-gmsz8, resource: bindings, ignored listing per whitelist May 20 12:30:15.752: INFO: namespace e2e-tests-configmap-gmsz8 deletion completed in 6.095410954s • [SLOW TEST:12.385 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:30:15.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-a8a2e6fc-9a95-11ea-b520-0242ac110018 STEP: Creating a pod to test consume secrets May 20 12:30:15.845: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a8a37308-9a95-11ea-b520-0242ac110018" in namespace "e2e-tests-projected-mr5sb" to be "success or failure" May 20 12:30:15.862: INFO: Pod "pod-projected-secrets-a8a37308-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.70315ms May 20 12:30:17.865: INFO: Pod "pod-projected-secrets-a8a37308-9a95-11ea-b520-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020560208s May 20 12:30:19.869: INFO: Pod "pod-projected-secrets-a8a37308-9a95-11ea-b520-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.02406576s May 20 12:30:21.873: INFO: Pod "pod-projected-secrets-a8a37308-9a95-11ea-b520-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028205581s STEP: Saw pod success May 20 12:30:21.873: INFO: Pod "pod-projected-secrets-a8a37308-9a95-11ea-b520-0242ac110018" satisfied condition "success or failure" May 20 12:30:21.876: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-a8a37308-9a95-11ea-b520-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 20 12:30:21.928: INFO: Waiting for pod pod-projected-secrets-a8a37308-9a95-11ea-b520-0242ac110018 to disappear May 20 12:30:21.932: INFO: Pod pod-projected-secrets-a8a37308-9a95-11ea-b520-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:30:21.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mr5sb" for this suite. 
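The defaultMode/fsGroup variant above layers three settings: runAsUser for the container process, defaultMode on the projected volume to tighten file permissions, and fsGroup, which both sets group ownership of the mounted files and is added to the container's supplemental groups, so 0440 files stay readable for the non-root user. A sketch with assumed values (UID 1000, GID 2000, mode 0440, secret my-secret):

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid, gid, mode := int64(1000), int64(2000), int32(0440)
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				// Files in the volume become group-owned by 2000 and the container
				// process gets 2000 as a supplemental group.
				FSGroup: &gid,
			},
			Volumes: []corev1.Volume{{
				Name: "creds",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode, // 0440: owner and group read only
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -ln /etc/creds && cat /etc/creds/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "creds", MountPath: "/etc/creds"}},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(&pod); err != nil {
		panic(err)
	}
}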
May 20 12:30:27.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:30:27.970: INFO: namespace: e2e-tests-projected-mr5sb, resource: bindings, ignored listing per whitelist May 20 12:30:28.028: INFO: namespace e2e-tests-projected-mr5sb deletion completed in 6.091393407s • [SLOW TEST:12.276 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:30:28.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0520 12:30:58.694408 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 20 12:30:58.694: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:30:58.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-qwrbf" for this suite. 
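The garbage-collector spec above deletes a deployment with deleteOptions.PropagationPolicy=Orphan and then checks that the ReplicaSet is left behind. A sketch of the orphaning delete outside the suite (deployment name is hypothetical; flag spellings differ by client version, so treat the commands as assumptions):

# Sketch: DeleteOptions that orphan dependents, e.g. triggered with
#   kubectl delete deployment example-deployment --cascade=false    (v1.13-era kubectl)
#   kubectl delete deployment example-deployment --cascade=orphan   (newer kubectl)
# or sent directly as the DELETE request body (usually as JSON):
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan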
May 20 12:31:04.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:31:04.755: INFO: namespace: e2e-tests-gc-qwrbf, resource: bindings, ignored listing per whitelist May 20 12:31:04.915: INFO: namespace e2e-tests-gc-qwrbf deletion completed in 6.217966543s • [SLOW TEST:36.887 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:31:04.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 12:31:05.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' May 20 12:31:05.233: INFO: stderr: "" May 20 12:31:05.233: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" May 20 12:31:05.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7s7tl' May 20 12:31:05.500: INFO: stderr: "" May 20 12:31:05.500: INFO: stdout: "replicationcontroller/redis-master created\n" May 20 12:31:05.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7s7tl' May 20 12:31:05.804: INFO: stderr: "" May 20 12:31:05.804: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. May 20 12:31:06.809: INFO: Selector matched 1 pods for map[app:redis] May 20 12:31:06.809: INFO: Found 0 / 1 May 20 12:31:07.808: INFO: Selector matched 1 pods for map[app:redis] May 20 12:31:07.808: INFO: Found 0 / 1 May 20 12:31:08.812: INFO: Selector matched 1 pods for map[app:redis] May 20 12:31:08.812: INFO: Found 0 / 1 May 20 12:31:09.809: INFO: Selector matched 1 pods for map[app:redis] May 20 12:31:09.809: INFO: Found 1 / 1 May 20 12:31:09.809: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 20 12:31:09.813: INFO: Selector matched 1 pods for map[app:redis] May 20 12:31:09.813: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 20 12:31:09.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-l4mb2 --namespace=e2e-tests-kubectl-7s7tl' May 20 12:31:09.935: INFO: stderr: "" May 20 12:31:09.935: INFO: stdout: "Name: redis-master-l4mb2\nNamespace: e2e-tests-kubectl-7s7tl\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Wed, 20 May 2020 12:31:05 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.222\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://7d067ff0f1a117eb99ae7e63b6be4135cb5452143e079c69a5216b1d52e47f17\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 20 May 2020 12:31:08 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-t4gtn (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-t4gtn:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-t4gtn\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-tests-kubectl-7s7tl/redis-master-l4mb2 to hunter-worker2\n Normal Pulled 3s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n" May 20 12:31:09.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-7s7tl' May 20 12:31:10.065: INFO: stderr: "" May 20 12:31:10.065: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-7s7tl\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-l4mb2\n" May 20 12:31:10.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-7s7tl' May 20 12:31:10.191: INFO: stderr: "" May 20 12:31:10.191: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-7s7tl\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.99.49.20\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.222:6379\nSession Affinity: None\nEvents: \n" May 20 12:31:10.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' May 20 12:31:10.336: INFO: stderr: "" May 20 12:31:10.336: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 20 May 2020 12:31:02 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 20 May 2020 12:31:02 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 20 May 2020 12:31:02 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 20 May 2020 12:31:02 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 65d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 65d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 65d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 65d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 20 12:31:10.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-7s7tl' May 20 12:31:10.458: INFO: stderr: "" May 20 12:31:10.458: INFO: stdout: "Name: e2e-tests-kubectl-7s7tl\nLabels: e2e-framework=kubectl\n e2e-run=37a9bd49-9a87-11ea-b520-0242ac110018\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:31:10.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7s7tl" for this suite. 
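The kubectl-describe spec above first pipes a redis-master ReplicationController and Service into kubectl create -f -, then describes the pod, rc, service, node and namespace. An approximate reconstruction of those two objects from the describe output shown in the log (field values not visible there are illustrative):

# Approximation of the redis-master objects described above.
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - name: redis-server
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  selector:
    app: redis
    role: master
  ports:
  - port: 6379
    targetPort: redis-server   # matches the named container port above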
May 20 12:31:34.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:31:34.508: INFO: namespace: e2e-tests-kubectl-7s7tl, resource: bindings, ignored listing per whitelist May 20 12:31:34.543: INFO: namespace e2e-tests-kubectl-7s7tl deletion completed in 24.081489465s • [SLOW TEST:29.628 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:31:34.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 12:32:00.687: INFO: Container started at 2020-05-20 12:31:37 +0000 UTC, pod became ready at 2020-05-20 12:31:59 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:32:00.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-bktkr" for this suite. 
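The probe spec above verifies that a container does not report Ready before its readiness probe's initial delay has elapsed and never restarts. A minimal sketch of such a pod (image, port and delay values are illustrative):

# Sketch: readiness probe with an initial delay; the pod should only become
# Ready after initialDelaySeconds, and the container should never restart.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: nginx:1.14-alpine
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20   # readiness checks start only after this delay
      periodSeconds: 5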
May 20 12:32:22.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:32:22.725: INFO: namespace: e2e-tests-container-probe-bktkr, resource: bindings, ignored listing per whitelist May 20 12:32:22.770: INFO: namespace e2e-tests-container-probe-bktkr deletion completed in 22.078521784s • [SLOW TEST:48.227 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:32:22.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 12:32:22.872: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 20 12:32:27.876: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 20 12:32:27.876: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 20 12:32:27.899: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-wqxc7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wqxc7/deployments/test-cleanup-deployment,UID:f758f095-9a95-11ea-99e8-0242ac110002,ResourceVersion:11580048,Generation:1,CreationTimestamp:2020-05-20 12:32:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 20 12:32:27.905: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. May 20 12:32:27.905: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 20 12:32:27.905: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-wqxc7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wqxc7/replicasets/test-cleanup-controller,UID:f45a66b8-9a95-11ea-99e8-0242ac110002,ResourceVersion:11580049,Generation:1,CreationTimestamp:2020-05-20 12:32:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment f758f095-9a95-11ea-99e8-0242ac110002 0xc000fa2b47 0xc000fa2b48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 20 12:32:27.982: INFO: Pod "test-cleanup-controller-qxtgh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-qxtgh,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-wqxc7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wqxc7/pods/test-cleanup-controller-qxtgh,UID:f45cd889-9a95-11ea-99e8-0242ac110002,ResourceVersion:11580043,Generation:0,CreationTimestamp:2020-05-20 12:32:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller f45a66b8-9a95-11ea-99e8-0242ac110002 0xc0009a4e2f 0xc0009a4fd0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xbzjp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xbzjp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xbzjp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009a5060} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009a5170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:32:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:32:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:32:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:32:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.223,StartTime:2020-05-20 12:32:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-20 12:32:25 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5df90aea1b080a0abb866fdbfa705e59c6fd51bfed35ec8a3ff5bdaa0ac125ca}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:32:27.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-wqxc7" for this suite. May 20 12:32:34.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:32:34.090: INFO: namespace: e2e-tests-deployment-wqxc7, resource: bindings, ignored listing per whitelist May 20 12:32:34.144: INFO: namespace e2e-tests-deployment-wqxc7 deletion completed in 6.146972938s • [SLOW TEST:11.373 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:32:34.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-fb257f83-9a95-11ea-b520-0242ac110018 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:32:40.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4fjcm" for this suite. 
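The ConfigMap binary-data spec above mounts a ConfigMap carrying both a text key (data) and a binary key (binaryData) and waits for both to show up in the volume. A sketch (names, keys and the base64 payload are illustrative):

# Sketch: ConfigMap with text and binary keys, both surfaced as volume files.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd
data:
  data-1: value-1
binaryData:
  dump.bin: CAkKCw==        # base64-encoded raw bytes
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-binary
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-binary-test
    image: busybox:1.29
    command: ["sh", "-c", "od -c /etc/configmap-volume/dump.bin; cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd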
May 20 12:33:02.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:33:02.402: INFO: namespace: e2e-tests-configmap-4fjcm, resource: bindings, ignored listing per whitelist May 20 12:33:02.404: INFO: namespace e2e-tests-configmap-4fjcm deletion completed in 22.104296143s • [SLOW TEST:28.261 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:33:02.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 20 12:33:02.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-t9xqv' May 20 12:33:02.614: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 20 12:33:02.614: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 May 20 12:33:06.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-t9xqv' May 20 12:33:06.775: INFO: stderr: "" May 20 12:33:06.775: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:33:06.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-t9xqv" for this suite. 
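The kubectl-run spec above uses the deprecated --generator=deployment/v1beta1 flag, and the output itself warns to use kubectl create instead. A sketch of the equivalent declarative manifest (labels and replica count are assumptions about what the generator produces):

# Sketch of the deployment created by
#   kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
# written as an apps/v1 manifest instead of the deprecated generator.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine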
May 20 12:33:34.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:33:34.805: INFO: namespace: e2e-tests-kubectl-t9xqv, resource: bindings, ignored listing per whitelist May 20 12:33:34.860: INFO: namespace e2e-tests-kubectl-t9xqv deletion completed in 28.080516262s • [SLOW TEST:32.455 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:33:34.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 20 12:33:34.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-nsmqw' May 20 12:33:35.071: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 20 12:33:35.071: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 May 20 12:33:37.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-nsmqw' May 20 12:33:37.367: INFO: stderr: "" May 20 12:33:37.367: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:33:37.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nsmqw" for this suite. 
May 20 12:33:43.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:33:43.802: INFO: namespace: e2e-tests-kubectl-nsmqw, resource: bindings, ignored listing per whitelist May 20 12:33:43.812: INFO: namespace e2e-tests-kubectl-nsmqw deletion completed in 6.284926113s • [SLOW TEST:8.952 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:33:43.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 12:33:43.889: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/ pods/ (200; 4.341159ms) May 20 12:33:43.892: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.671071ms) May 20 12:33:43.895: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.484002ms) May 20 12:33:43.898: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.884968ms) May 20 12:33:43.900: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.683457ms) May 20 12:33:43.903: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.862927ms) May 20 12:33:43.906: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.205638ms) May 20 12:33:43.909: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.918749ms) May 20 12:33:43.912: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.959346ms) May 20 12:33:43.916: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.151232ms) May 20 12:33:43.919: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.153706ms) May 20 12:33:43.922: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.36825ms) May 20 12:33:43.947: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 24.516272ms) May 20 12:33:43.951: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 4.20889ms) May 20 12:33:43.955: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 4.238265ms) May 20 12:33:43.958: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.93347ms) May 20 12:33:43.961: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.238647ms) May 20 12:33:43.964: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.979552ms) May 20 12:33:43.968: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.116369ms) May 20 12:33:43.971: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.139713ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:33:43.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-7dphd" for this suite. May 20 12:33:49.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:33:50.051: INFO: namespace: e2e-tests-proxy-7dphd, resource: bindings, ignored listing per whitelist May 20 12:33:50.069: INFO: namespace e2e-tests-proxy-7dphd deletion completed in 6.094925301s • [SLOW TEST:6.257 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:33:50.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0520 12:34:30.775959 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 20 12:34:30.776: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:34:30.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-tltjm" for this suite. May 20 12:34:38.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:34:38.964: INFO: namespace: e2e-tests-gc-tltjm, resource: bindings, ignored listing per whitelist May 20 12:34:38.999: INFO: namespace e2e-tests-gc-tltjm deletion completed in 8.219665051s • [SLOW TEST:48.930 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:34:38.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 12:34:39.156: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:34:43.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-jrhf2" for this suite. 
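The pods spec above drives remote command execution against a pod's exec subresource over a websocket. The same subresource backs kubectl exec; a sketch of a pod to exec into, with the illustrative command as a comment (pod name, image and command are assumptions, not taken from the log):

# Sketch: a long-running pod as a target for the exec subresource, e.g.
#   kubectl exec pod-exec-websockets -- echo remote execution
# which is served by /api/v1/namespaces/<ns>/pods/pod-exec-websockets/exec.
apiVersion: v1
kind: Pod
metadata:
  name: pod-exec-websockets
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sleep", "3600"]   # keep the container alive so exec has a target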
May 20 12:35:33.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:35:33.340: INFO: namespace: e2e-tests-pods-jrhf2, resource: bindings, ignored listing per whitelist May 20 12:35:33.400: INFO: namespace e2e-tests-pods-jrhf2 deletion completed in 50.117735022s • [SLOW TEST:54.400 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:35:33.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:35:37.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-kq465" for this suite. 
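The kubelet spec above runs a one-shot busybox command and checks that its stdout is retrievable from the container logs. A sketch (pod name and message are illustrative):

# Sketch: one-shot busybox command whose output should appear via
#   kubectl logs busybox-scheduling
apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo 'Hello from the busybox command'"]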
May 20 12:36:27.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:36:27.561: INFO: namespace: e2e-tests-kubelet-test-kq465, resource: bindings, ignored listing per whitelist May 20 12:36:27.614: INFO: namespace e2e-tests-kubelet-test-kq465 deletion completed in 50.08692727s • [SLOW TEST:54.215 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:36:27.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 20 12:36:27.703: INFO: Creating deployment "test-recreate-deployment" May 20 12:36:27.718: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 20 12:36:27.732: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 20 12:36:29.740: INFO: Waiting deployment "test-recreate-deployment" to complete May 20 12:36:29.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725574987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725574987, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725574987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725574987, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 12:36:31.747: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 20 12:36:31.755: INFO: Updating deployment test-recreate-deployment May 20 12:36:31.755: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 20 12:36:32.041: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-pzpzr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pzpzr/deployments/test-recreate-deployment,UID:864acd8e-9a96-11ea-99e8-0242ac110002,ResourceVersion:11580951,Generation:2,CreationTimestamp:2020-05-20 12:36:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-20 12:36:31 +0000 UTC 2020-05-20 12:36:31 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-20 12:36:31 +0000 UTC 2020-05-20 12:36:27 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 20 12:36:32.268: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-pzpzr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pzpzr/replicasets/test-recreate-deployment-589c4bfd,UID:88c9b358-9a96-11ea-99e8-0242ac110002,ResourceVersion:11580950,Generation:1,CreationTimestamp:2020-05-20 12:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 864acd8e-9a96-11ea-99e8-0242ac110002 0xc001271b4f 0xc001271b70}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 20 12:36:32.268: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 20 12:36:32.268: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-pzpzr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pzpzr/replicasets/test-recreate-deployment-5bf7f65dc,UID:864f26da-9a96-11ea-99e8-0242ac110002,ResourceVersion:11580940,Generation:2,CreationTimestamp:2020-05-20 12:36:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 864acd8e-9a96-11ea-99e8-0242ac110002 0xc001271d20 0xc001271d21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 20 12:36:32.272: INFO: Pod "test-recreate-deployment-589c4bfd-ll7x7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-ll7x7,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-pzpzr,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pzpzr/pods/test-recreate-deployment-589c4bfd-ll7x7,UID:88ca4680-9a96-11ea-99e8-0242ac110002,ResourceVersion:11580952,Generation:0,CreationTimestamp:2020-05-20 12:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 88c9b358-9a96-11ea-99e8-0242ac110002 0xc001a0e0ff 0xc001a0e1f0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l2mvf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l2mvf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-l2mvf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a0e260} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a0e280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:36:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:36:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:36:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-20 12:36:31 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-20 12:36:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:36:32.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-pzpzr" for this suite. 
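The ReplicaSet dumps above capture the defining behaviour of the Recreate strategy: the old ReplicaSet (redis) reports Replicas:*0 before the new one (nginx) has any ready replicas, i.e. all old pods are torn down before any replacement is created. Below is a minimal client-go sketch of that flow, not the e2e framework's own code; it assumes a recent client-go (context-taking method signatures) and a kubeconfig in $KUBECONFIG, and the namespace and image names are illustrative except where they mirror the log.

// recreate_deploy.go — sketch of a Deployment with strategy Recreate whose pod
// template is then changed, forcing the old ReplicaSet to be scaled to 0 first.
// Assumes a recent client-go; wiring and namespace are illustrative.
package main

import (
	"context"
	"fmt"
	"os"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "default" // the e2e run uses a generated e2e-tests-deployment-* namespace
	labels := map[string]string{"name": "sample-pod-3"}

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: delete all old pods before creating new ones, which is
			// why the old ReplicaSet in the dump above shows Replicas:*0.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "redis",
					Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
				}}},
			},
		},
	}

	ctx := context.Background()
	if _, err := cs.AppsV1().Deployments(ns).Create(ctx, d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Trigger the recreate: switch the pod template to nginx, as the test does.
	d, err = cs.AppsV1().Deployments(ns).Get(ctx, "test-recreate-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	d.Spec.Template.Spec.Containers[0].Name = "nginx"
	d.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.14-alpine"
	if _, err := cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("updated; the old ReplicaSet is scaled to 0 before the new one starts")
}

This matches the pod dump that follows in the log: the new nginx pod is still Pending/ContainerCreating at the moment the old ReplicaSet already reports zero replicas.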
May 20 12:36:38.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:36:38.509: INFO: namespace: e2e-tests-deployment-pzpzr, resource: bindings, ignored listing per whitelist May 20 12:36:38.538: INFO: namespace e2e-tests-deployment-pzpzr deletion completed in 6.262760165s • [SLOW TEST:10.924 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:36:38.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 20 12:36:38.660: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 12:36:38.727: INFO: Waiting for terminating namespaces to be deleted... May 20 12:36:38.729: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 20 12:36:38.739: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 20 12:36:38.739: INFO: Container kube-proxy ready: true, restart count 0 May 20 12:36:38.739: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 20 12:36:38.739: INFO: Container kindnet-cni ready: true, restart count 0 May 20 12:36:38.739: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 20 12:36:38.739: INFO: Container coredns ready: true, restart count 0 May 20 12:36:38.739: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 20 12:36:38.746: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 20 12:36:38.746: INFO: Container kindnet-cni ready: true, restart count 0 May 20 12:36:38.746: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 20 12:36:38.746: INFO: Container coredns ready: true, restart count 0 May 20 12:36:38.746: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 20 12:36:38.746: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1610bcbf69e6c6d3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
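The FailedScheduling event above ("3 node(s) didn't match node selector") comes from submitting a pod whose nodeSelector no node can satisfy. A minimal sketch of such a pod follows; the pod name mirrors the event in the log, but the label key and value are illustrative stand-ins for the randomly generated ones the test uses.

// nodeselector_pod.go — sketch of a pod that can never be scheduled because
// its nodeSelector references a label no node carries; the scheduler leaves it
// Pending and emits a FailedScheduling event like the one logged above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node has this label, so no node passes the NodeSelector predicate.
			NodeSelector: map[string]string{"example.com/label-that-matches-nothing": "42"},
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}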
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:36:39.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-kdm89" for this suite. May 20 12:36:45.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:36:45.797: INFO: namespace: e2e-tests-sched-pred-kdm89, resource: bindings, ignored listing per whitelist May 20 12:36:45.861: INFO: namespace e2e-tests-sched-pred-kdm89 deletion completed in 6.089300405s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.323 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:36:45.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-2cgzc [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-2cgzc STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-2cgzc STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-2cgzc STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-2cgzc May 20 12:36:50.029: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2cgzc, name: ss-0, uid: 91632ab6-9a96-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. May 20 12:36:51.245: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2cgzc, name: ss-0, uid: 91632ab6-9a96-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. May 20 12:36:51.252: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2cgzc, name: ss-0, uid: 91632ab6-9a96-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
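The Pending/Failed cycle logged for ss-0 is driven by the "conflicting port" the STEPs mention: a standalone pod already holds a host port on the node, so the kubelet rejects the stateful pod at admission (phase Failed) and the StatefulSet controller keeps recreating it until the blocker is removed. A minimal sketch of the two colliding objects follows; the port number, node name, labels and images are illustrative, not the values the test generates.

// conflicting_port.go — sketch of a standalone pod and a StatefulSet pod
// template that request the same hostPort on the same node, reproducing the
// shape of the conflict behind the Failed ss-0 pods in the log above.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	const node = "hunter-worker2" // pin both pods to one node (illustrative)
	const port int32 = 21017      // any free host port; the collision is what matters
	labels := map[string]string{"app": "ss-conflict-demo"}

	conflicting := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			NodeName: node,
			Containers: []corev1.Container{{
				Name:  "holder",
				Image: "docker.io/library/nginx:1.14-alpine",
				Ports: []corev1.ContainerPort{{HostPort: port, ContainerPort: port}},
			}},
		},
	}

	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    int32Ptr(1),
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					NodeName: node, // same node + same hostPort => kubelet rejects ss-0
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "docker.io/library/nginx:1.14-alpine",
						Ports: []corev1.ContainerPort{{HostPort: port, ContainerPort: port}},
					}},
				},
			},
		},
	}

	for _, obj := range []interface{}{conflicting, ss} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}

Once the standalone pod is deleted (next log line), the controller's recreated ss-0 admits cleanly and reaches Running, which is exactly what the test waits for.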
May 20 12:36:51.320: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-2cgzc STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-2cgzc STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-2cgzc and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 20 12:37:05.568: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2cgzc May 20 12:37:05.572: INFO: Scaling statefulset ss to 0 May 20 12:37:15.591: INFO: Waiting for statefulset status.replicas updated to 0 May 20 12:37:15.594: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:37:15.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-2cgzc" for this suite. May 20 12:37:21.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:37:21.667: INFO: namespace: e2e-tests-statefulset-2cgzc, resource: bindings, ignored listing per whitelist May 20 12:37:21.749: INFO: namespace e2e-tests-statefulset-2cgzc deletion completed in 6.13197961s • [SLOW TEST:35.888 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:37:21.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-a6967c61-9a96-11ea-b520-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-a6967cd3-9a96-11ea-b520-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a6967c61-9a96-11ea-b520-0242ac110018 STEP: Updating configmap cm-test-opt-upd-a6967cd3-9a96-11ea-b520-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-a6967d0a-9a96-11ea-b520-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:37:30.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qnc9v" for this suite. 
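The projected-configMap spec above mounts configmaps that are marked optional, then deletes one, updates another and creates a third, expecting the mounted files to follow without restarting the pod. A minimal sketch of that volume shape follows; the configmap, volume and file names are illustrative stand-ins for the generated cm-test-opt-{del,upd,create}-* names in the log, and the single projected volume with three sources is a simplification of the test's layout.

// projected_optional_cm.go — sketch of a pod with a projected volume whose
// configMap sources are optional, so the kubelet tolerates missing configmaps
// and refreshes the mounted files as they are deleted, updated or created.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func cmSource(name string) corev1.VolumeProjection {
	return corev1.VolumeProjection{
		ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: name},
			Optional:             boolPtr(true), // a missing configmap does not block the pod
		},
	}
}

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							cmSource("cm-test-opt-del"),    // deleted mid-test
							cmSource("cm-test-opt-upd"),    // updated mid-test
							cmSource("cm-test-opt-create"), // created mid-test
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "reader",
				Image: "docker.io/library/nginx:1.14-alpine",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}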
May 20 12:37:52.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:37:52.084: INFO: namespace: e2e-tests-projected-qnc9v, resource: bindings, ignored listing per whitelist May 20 12:37:52.120: INFO: namespace e2e-tests-projected-qnc9v deletion completed in 22.116553401s • [SLOW TEST:30.371 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 20 12:37:52.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-z7phx STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 12:37:52.224: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 20 12:38:18.358: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.197:8080/dial?request=hostName&protocol=udp&host=10.244.1.196&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-z7phx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 12:38:18.358: INFO: >>> kubeConfig: /root/.kube/config I0520 12:38:18.395991 7 log.go:172] (0xc000825d90) (0xc000b71860) Create stream I0520 12:38:18.396022 7 log.go:172] (0xc000825d90) (0xc000b71860) Stream added, broadcasting: 1 I0520 12:38:18.398907 7 log.go:172] (0xc000825d90) Reply frame received for 1 I0520 12:38:18.398962 7 log.go:172] (0xc000825d90) (0xc000b71900) Create stream I0520 12:38:18.398978 7 log.go:172] (0xc000825d90) (0xc000b71900) Stream added, broadcasting: 3 I0520 12:38:18.400098 7 log.go:172] (0xc000825d90) Reply frame received for 3 I0520 12:38:18.400138 7 log.go:172] (0xc000825d90) (0xc001cef540) Create stream I0520 12:38:18.400154 7 log.go:172] (0xc000825d90) (0xc001cef540) Stream added, broadcasting: 5 I0520 12:38:18.401471 7 log.go:172] (0xc000825d90) Reply frame received for 5 I0520 12:38:18.507552 7 log.go:172] (0xc000825d90) Data frame received for 3 I0520 12:38:18.507586 7 log.go:172] (0xc000b71900) (3) Data frame handling I0520 12:38:18.507606 7 log.go:172] (0xc000b71900) (3) Data frame sent I0520 12:38:18.508590 7 log.go:172] (0xc000825d90) Data frame received for 3 I0520 12:38:18.508634 7 log.go:172] (0xc000b71900) (3) Data frame handling I0520 12:38:18.508682 7 log.go:172] (0xc000825d90) Data frame received for 5 I0520 12:38:18.508714 7 log.go:172] (0xc001cef540) (5) Data frame handling I0520 12:38:18.510578 7 log.go:172] (0xc000825d90) Data 
frame received for 1 I0520 12:38:18.510602 7 log.go:172] (0xc000b71860) (1) Data frame handling I0520 12:38:18.510617 7 log.go:172] (0xc000b71860) (1) Data frame sent I0520 12:38:18.510640 7 log.go:172] (0xc000825d90) (0xc000b71860) Stream removed, broadcasting: 1 I0520 12:38:18.510669 7 log.go:172] (0xc000825d90) Go away received I0520 12:38:18.510847 7 log.go:172] (0xc000825d90) (0xc000b71860) Stream removed, broadcasting: 1 I0520 12:38:18.510878 7 log.go:172] (0xc000825d90) (0xc000b71900) Stream removed, broadcasting: 3 I0520 12:38:18.510894 7 log.go:172] (0xc000825d90) (0xc001cef540) Stream removed, broadcasting: 5 May 20 12:38:18.510: INFO: Waiting for endpoints: map[] May 20 12:38:18.525: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.197:8080/dial?request=hostName&protocol=udp&host=10.244.2.235&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-z7phx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 12:38:18.525: INFO: >>> kubeConfig: /root/.kube/config I0520 12:38:18.561498 7 log.go:172] (0xc001c94370) (0xc0022f80a0) Create stream I0520 12:38:18.561526 7 log.go:172] (0xc001c94370) (0xc0022f80a0) Stream added, broadcasting: 1 I0520 12:38:18.564461 7 log.go:172] (0xc001c94370) Reply frame received for 1 I0520 12:38:18.564501 7 log.go:172] (0xc001c94370) (0xc001cef5e0) Create stream I0520 12:38:18.564515 7 log.go:172] (0xc001c94370) (0xc001cef5e0) Stream added, broadcasting: 3 I0520 12:38:18.565801 7 log.go:172] (0xc001c94370) Reply frame received for 3 I0520 12:38:18.565839 7 log.go:172] (0xc001c94370) (0xc001cef720) Create stream I0520 12:38:18.565852 7 log.go:172] (0xc001c94370) (0xc001cef720) Stream added, broadcasting: 5 I0520 12:38:18.566735 7 log.go:172] (0xc001c94370) Reply frame received for 5 I0520 12:38:18.650456 7 log.go:172] (0xc001c94370) Data frame received for 3 I0520 12:38:18.650489 7 log.go:172] (0xc001cef5e0) (3) Data frame handling I0520 12:38:18.650512 7 log.go:172] (0xc001cef5e0) (3) Data frame sent I0520 12:38:18.651061 7 log.go:172] (0xc001c94370) Data frame received for 3 I0520 12:38:18.651087 7 log.go:172] (0xc001cef5e0) (3) Data frame handling I0520 12:38:18.651292 7 log.go:172] (0xc001c94370) Data frame received for 5 I0520 12:38:18.651311 7 log.go:172] (0xc001cef720) (5) Data frame handling I0520 12:38:18.653011 7 log.go:172] (0xc001c94370) Data frame received for 1 I0520 12:38:18.653029 7 log.go:172] (0xc0022f80a0) (1) Data frame handling I0520 12:38:18.653038 7 log.go:172] (0xc0022f80a0) (1) Data frame sent I0520 12:38:18.653052 7 log.go:172] (0xc001c94370) (0xc0022f80a0) Stream removed, broadcasting: 1 I0520 12:38:18.653263 7 log.go:172] (0xc001c94370) (0xc0022f80a0) Stream removed, broadcasting: 1 I0520 12:38:18.653281 7 log.go:172] (0xc001c94370) (0xc001cef5e0) Stream removed, broadcasting: 3 I0520 12:38:18.653347 7 log.go:172] (0xc001c94370) Go away received I0520 12:38:18.653431 7 log.go:172] (0xc001c94370) (0xc001cef720) Stream removed, broadcasting: 5 May 20 12:38:18.653: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 20 12:38:18.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-z7phx" for this suite. 
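The curl commands in the exec streams above drive the UDP check: the helper pod's HTTP endpoint on port 8080 is asked, via /dial?request=hostName&protocol=udp&host=...&port=8081&tries=1, to send a UDP probe to each peer pod and report what came back. Below is a minimal Go sketch of the same request; the pod IPs are illustrative, and the "responses" field name follows the netexec-style test image rather than anything shown in the log, so treat it as an assumption.

// dial_udp_check.go — sketch of the /dial probe: ask the helper pod to fire a
// UDP "hostName" request at a peer pod and print the answers it relays back.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	helper := "10.244.1.197:8080" // pod exposing the /dial HTTP endpoint
	peer := "10.244.1.196"        // pod we want to reach over UDP

	u := url.URL{
		Scheme: "http",
		Host:   helper,
		Path:   "/dial",
		RawQuery: url.Values{
			"request":  {"hostName"},
			"protocol": {"udp"},
			"host":     {peer},
			"port":     {"8081"},
			"tries":    {"1"},
		}.Encode(),
	}

	resp, err := http.Get(u.String())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var body struct {
		Responses []string `json:"responses"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		panic(err)
	}
	// A non-empty list means the UDP datagram reached the peer and its answer
	// (the peer's hostname) made it back: intra-pod UDP connectivity works.
	fmt.Printf("responses from %s: %v\n", peer, body.Responses)
}

The empty "Waiting for endpoints: map[]" lines in the log are the test's way of recording that every expected peer has already answered, so nothing is left to wait for.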
May 20 12:38:42.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 20 12:38:42.721: INFO: namespace: e2e-tests-pod-network-test-z7phx, resource: bindings, ignored listing per whitelist May 20 12:38:42.740: INFO: namespace e2e-tests-pod-network-test-z7phx deletion completed in 24.083187988s • [SLOW TEST:50.619 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ May 20 12:38:42.740: INFO: Running AfterSuite actions on all nodes May 20 12:38:42.740: INFO: Running AfterSuite actions on node 1 May 20 12:38:42.740: INFO: Skipping dumping logs from cluster Ran 200 of 2164 Specs in 6708.718 seconds SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped PASS