I0106 17:15:30.950152 6 e2e.go:224] Starting e2e run "c73021fa-5042-11eb-8655-0242ac110009" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1609953330 - Will randomize all specs
Will run 201 of 2164 specs

Jan 6 17:15:31.119: INFO: >>> kubeConfig: /root/.kube/config
Jan 6 17:15:31.122: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 6 17:15:31.137: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 6 17:15:31.167: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 6 17:15:31.167: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 6 17:15:31.167: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 6 17:15:31.177: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jan 6 17:15:31.177: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 6 17:15:31.177: INFO: e2e test version: v1.13.12
Jan 6 17:15:31.178: INFO: kube-apiserver version: v1.13.12
SSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:15:31.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
Jan 6 17:15:31.239: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 6 17:15:31.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:15:35.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bb6mm" for this suite.
Jan 6 17:16:13.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:16:14.145: INFO: namespace: e2e-tests-pods-bb6mm, resource: bindings, ignored listing per whitelist
Jan 6 17:16:14.168: INFO: namespace e2e-tests-pods-bb6mm deletion completed in 38.787825811s

• [SLOW TEST:42.991 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:16:14.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-jtgmh in namespace e2e-tests-proxy-jdpqd
I0106 17:16:14.348054 6 runners.go:184] Created replication controller with name: proxy-service-jtgmh, namespace: e2e-tests-proxy-jdpqd, replica count: 1
I0106 17:16:15.398435 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0106 17:16:16.398655 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0106 17:16:17.398861 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0106 17:16:18.399015 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0106 17:16:19.399262 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0106 17:16:20.399498 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0106 17:16:21.399734 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0106 17:16:22.399946 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0106 17:16:23.400164 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0106 17:16:24.400398 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0106 17:16:25.400607 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0106 17:16:26.400819 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0106 17:16:27.401126 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0106 17:16:28.401393 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0106 17:16:29.401582 6 runners.go:184] proxy-service-jtgmh Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 6 17:16:29.404: INFO: setup took 15.149132598s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 6 17:16:29.410: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-jdpqd/pods/proxy-service-jtgmh-cl29p:162/proxy/: bar (200; 5.576937ms)
Jan 6 17:16:29.410: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-jdpqd/pods/http:proxy-service-jtgmh-cl29p:162/proxy/: bar (200; 5.489467ms)
Jan 6 17:16:29.411: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-jdpqd/pods/http:proxy-service-jtgmh-cl29p:160/proxy/: foo (200; 6.229822ms)
Jan 6 17:16:29.411: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-jdpqd/pods/proxy-service-jtgmh-cl29p:160/proxy/: foo (200; 6.24789ms)
Jan 6 17:16:29.411: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-jdpqd/services/proxy-service-jtgmh:portname2/proxy/: bar (200; 6.328533ms)
Jan 6 17:16:29.411: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-jdpqd/pods/proxy-service-jtgmh-cl29p:1080/proxy/: 
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 6 17:16:41.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-z5t74'
Jan 6 17:16:43.487: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 6 17:16:43.488: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan 6 17:16:45.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-z5t74'
Jan 6 17:16:45.921: INFO: stderr: ""
Jan 6 17:16:45.921: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:16:45.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-z5t74" for this suite.
Jan 6 17:16:51.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:16:51.992: INFO: namespace: e2e-tests-kubectl-z5t74, resource: bindings, ignored listing per whitelist
Jan 6 17:16:52.050: INFO: namespace e2e-tests-kubectl-z5t74 deletion completed in 6.123990742s

• [SLOW TEST:11.017 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:16:52.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-f7dab6a8-5042-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 6 17:16:52.143: INFO: Waiting up to 5m0s for pod "pod-configmaps-f7db737c-5042-11eb-8655-0242ac110009" in namespace "e2e-tests-configmap-g48rs" to be "success or failure"
Jan 6 17:16:52.168: INFO: Pod "pod-configmaps-f7db737c-5042-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 25.729232ms
Jan 6 17:16:54.199: INFO: Pod "pod-configmaps-f7db737c-5042-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056712853s
Jan 6 17:16:56.204: INFO: Pod "pod-configmaps-f7db737c-5042-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061316741s
STEP: Saw pod success
Jan 6 17:16:56.204: INFO: Pod "pod-configmaps-f7db737c-5042-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan 6 17:16:56.207: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-f7db737c-5042-11eb-8655-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jan 6 17:16:56.219: INFO: Waiting for pod pod-configmaps-f7db737c-5042-11eb-8655-0242ac110009 to disappear
Jan 6 17:16:56.224: INFO: Pod pod-configmaps-f7db737c-5042-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:16:56.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-g48rs" for this suite.
Jan 6 17:17:02.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:17:02.267: INFO: namespace: e2e-tests-configmap-g48rs, resource: bindings, ignored listing per whitelist
Jan 6 17:17:02.323: INFO: namespace e2e-tests-configmap-g48rs deletion completed in 6.096027356s

• [SLOW TEST:10.273 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:17:02.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-fdfc2a7b-5042-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 6 17:17:02.455: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fe01798c-5042-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-799v5" to be "success or failure"
Jan 6 17:17:02.485: INFO: Pod "pod-projected-configmaps-fe01798c-5042-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 29.39785ms
Jan 6 17:17:04.553: INFO: Pod "pod-projected-configmaps-fe01798c-5042-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098111953s
Jan 6 17:17:06.570: INFO: Pod "pod-projected-configmaps-fe01798c-5042-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.114986466s
Jan 6 17:17:08.574: INFO: Pod "pod-projected-configmaps-fe01798c-5042-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11882432s
STEP: Saw pod success
Jan 6 17:17:08.574: INFO: Pod "pod-projected-configmaps-fe01798c-5042-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan 6 17:17:08.577: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-fe01798c-5042-11eb-8655-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 6 17:17:08.611: INFO: Waiting for pod pod-projected-configmaps-fe01798c-5042-11eb-8655-0242ac110009 to disappear
Jan 6 17:17:08.627: INFO: Pod pod-projected-configmaps-fe01798c-5042-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:17:08.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-799v5" for this suite.
Jan 6 17:17:14.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:17:14.687: INFO: namespace: e2e-tests-projected-799v5, resource: bindings, ignored listing per whitelist
Jan 6 17:17:14.733: INFO: namespace e2e-tests-projected-799v5 deletion completed in 6.101844736s

• [SLOW TEST:12.410 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:17:14.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:17:14.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-fctxd" for this suite.
Jan 6 17:17:36.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:17:37.053: INFO: namespace: e2e-tests-pods-fctxd, resource: bindings, ignored listing per whitelist
Jan 6 17:17:37.130: INFO: namespace e2e-tests-pods-fctxd deletion completed in 22.190314135s

• [SLOW TEST:22.396 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:17:37.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 6 17:17:37.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-ftt74'
Jan 6 17:17:37.299: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 6 17:17:37.299: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 6 17:17:37.303: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 6 17:17:37.310: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 6 17:17:37.355: INFO: scanned /root for discovery docs: 
Jan 6 17:17:37.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-ftt74'
Jan 6 17:17:54.687: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 6 17:17:54.687: INFO: stdout: "Created e2e-test-nginx-rc-c80ef0d13ad3ea1630301a2e14e3cd3a\nScaling up e2e-test-nginx-rc-c80ef0d13ad3ea1630301a2e14e3cd3a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-c80ef0d13ad3ea1630301a2e14e3cd3a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-c80ef0d13ad3ea1630301a2e14e3cd3a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jan 6 17:17:54.687: INFO: stdout: "Created e2e-test-nginx-rc-c80ef0d13ad3ea1630301a2e14e3cd3a\nScaling up e2e-test-nginx-rc-c80ef0d13ad3ea1630301a2e14e3cd3a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-c80ef0d13ad3ea1630301a2e14e3cd3a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-c80ef0d13ad3ea1630301a2e14e3cd3a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 6 17:17:54.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ftt74'
Jan 6 17:17:54.782: INFO: stderr: ""
Jan 6 17:17:54.782: INFO: stdout: "e2e-test-nginx-rc-c80ef0d13ad3ea1630301a2e14e3cd3a-vccrg "
Jan 6 17:17:54.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c80ef0d13ad3ea1630301a2e14e3cd3a-vccrg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ftt74'
Jan 6 17:17:54.880: INFO: stderr: ""
Jan 6 17:17:54.880: INFO: stdout: "true"
Jan 6 17:17:54.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c80ef0d13ad3ea1630301a2e14e3cd3a-vccrg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ftt74'
Jan 6 17:17:55.015: INFO: stderr: ""
Jan 6 17:17:55.015: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 6 17:17:55.015: INFO: e2e-test-nginx-rc-c80ef0d13ad3ea1630301a2e14e3cd3a-vccrg is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 6 17:17:55.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ftt74'
Jan 6 17:17:55.179: INFO: stderr: ""
Jan 6 17:17:55.179: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:17:55.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ftt74" for this suite.
Jan 6 17:18:17.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:18:17.254: INFO: namespace: e2e-tests-kubectl-ftt74, resource: bindings, ignored listing per whitelist
Jan 6 17:18:17.347: INFO: namespace e2e-tests-kubectl-ftt74 deletion completed in 22.151751916s

• [SLOW TEST:40.217 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:18:17.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 6 17:18:17.493: INFO: Waiting up to 5m0s for pod "downward-api-2ab9914d-5043-11eb-8655-0242ac110009" in namespace "e2e-tests-downward-api-tm6sd" to be "success or failure"
Jan 6 17:18:17.497: INFO: Pod "downward-api-2ab9914d-5043-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.482703ms
Jan 6 17:18:19.501: INFO: Pod "downward-api-2ab9914d-5043-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007614332s
Jan 6 17:18:21.505: INFO: Pod "downward-api-2ab9914d-5043-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011865392s
STEP: Saw pod success
Jan 6 17:18:21.505: INFO: Pod "downward-api-2ab9914d-5043-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan 6 17:18:21.508: INFO: Trying to get logs from node hunter-worker2 pod downward-api-2ab9914d-5043-11eb-8655-0242ac110009 container dapi-container: 
STEP: delete the pod
Jan 6 17:18:21.610: INFO: Waiting for pod downward-api-2ab9914d-5043-11eb-8655-0242ac110009 to disappear
Jan 6 17:18:21.620: INFO: Pod downward-api-2ab9914d-5043-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:18:21.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tm6sd" for this suite.
Jan 6 17:18:27.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:18:27.674: INFO: namespace: e2e-tests-downward-api-tm6sd, resource: bindings, ignored listing per whitelist
Jan 6 17:18:27.724: INFO: namespace e2e-tests-downward-api-tm6sd deletion completed in 6.101316597s

• [SLOW TEST:10.377 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:18:27.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-65h95
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 6 17:18:27.859: INFO: Found 0 stateful pods, waiting for 3
Jan 6 17:18:37.865: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 6 17:18:37.865: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 6 17:18:37.865: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 6 17:18:47.865: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 6 17:18:47.865: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 6 17:18:47.865: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 6 17:18:47.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-65h95 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 6 17:18:48.140: INFO: stderr: "I0106 17:18:48.003087 216 log.go:172] (0xc000160840) (0xc0005e1400) Create stream\nI0106 17:18:48.003168 216 log.go:172] (0xc000160840) (0xc0005e1400) Stream added, broadcasting: 1\nI0106 17:18:48.006099 216 log.go:172] (0xc000160840) Reply frame received for 1\nI0106 17:18:48.006140 216 log.go:172] (0xc000160840) (0xc0006c4000) Create stream\nI0106 17:18:48.006153 216 log.go:172] (0xc000160840) (0xc0006c4000) Stream added, broadcasting: 3\nI0106 17:18:48.006887 216 log.go:172] (0xc000160840) Reply frame received for 3\nI0106 17:18:48.006922 216 log.go:172] (0xc000160840) (0xc0005e14a0) Create stream\nI0106 17:18:48.006929 216 log.go:172] (0xc000160840) (0xc0005e14a0) Stream added, broadcasting: 5\nI0106 17:18:48.007711 216 log.go:172] (0xc000160840) Reply frame received for 5\nI0106 17:18:48.131205 216 log.go:172] (0xc000160840) Data frame received for 3\nI0106 17:18:48.131266 216 log.go:172] (0xc0006c4000) (3) Data frame handling\nI0106 17:18:48.131283 216 log.go:172] (0xc0006c4000) (3) Data frame sent\nI0106 17:18:48.131296 216 log.go:172] (0xc000160840) Data frame received for 3\nI0106 17:18:48.131305 216 log.go:172] (0xc0006c4000) (3) Data frame handling\nI0106 17:18:48.131330 216 log.go:172] (0xc000160840) Data frame received for 5\nI0106 17:18:48.131351 216 log.go:172] (0xc0005e14a0) (5) Data frame handling\nI0106 17:18:48.133387 216 log.go:172] (0xc000160840) Data frame received for 1\nI0106 17:18:48.133427 216 log.go:172] (0xc0005e1400) (1) Data frame handling\nI0106 17:18:48.133448 216 log.go:172] (0xc0005e1400) (1) Data frame sent\nI0106 17:18:48.133466 216 log.go:172] (0xc000160840) (0xc0005e1400) Stream removed, broadcasting: 1\nI0106 17:18:48.133681 216 log.go:172] (0xc000160840) (0xc0005e1400) Stream removed, broadcasting: 1\nI0106 17:18:48.133712 216 log.go:172] (0xc000160840) (0xc0006c4000) Stream removed, broadcasting: 3\nI0106 17:18:48.133884 216 log.go:172] (0xc000160840) (0xc0005e14a0) Stream removed, broadcasting: 5\n"
Jan 6 17:18:48.140: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 6 17:18:48.140: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 6 17:18:58.171: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 6 17:19:08.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-65h95 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 6 17:19:08.467: INFO: stderr: "I0106 17:19:08.370522 239 log.go:172] (0xc00015c580) (0xc0007f5220) Create stream\nI0106 17:19:08.370596 239 log.go:172] (0xc00015c580) (0xc0007f5220) Stream added, broadcasting: 1\nI0106 17:19:08.373231 239 log.go:172] (0xc00015c580) Reply frame received for 1\nI0106 17:19:08.373281 239 log.go:172] (0xc00015c580) (0xc000780000) Create stream\nI0106 17:19:08.373301 239 log.go:172] (0xc00015c580) (0xc000780000) Stream added, broadcasting: 3\nI0106 17:19:08.374213 239 log.go:172] (0xc00015c580) Reply frame received for 3\nI0106 17:19:08.374277 239 log.go:172] (0xc00015c580) (0xc00031a000) Create stream\nI0106 17:19:08.374301 239 log.go:172] (0xc00015c580) (0xc00031a000) Stream added, broadcasting: 5\nI0106 17:19:08.375251 239 log.go:172] (0xc00015c580) Reply frame received for 5\nI0106 17:19:08.461480 239 log.go:172] (0xc00015c580) Data frame received for 3\nI0106 17:19:08.461574 239 log.go:172] (0xc000780000) (3) Data frame handling\nI0106 17:19:08.461618 239 log.go:172] (0xc000780000) (3) Data frame sent\nI0106 17:19:08.461734 239 log.go:172] (0xc00015c580) Data frame received for 3\nI0106 17:19:08.461773 239 log.go:172] (0xc000780000) (3) Data frame handling\nI0106 17:19:08.461801 239 log.go:172] (0xc00015c580) Data frame received for 5\nI0106 17:19:08.461815 239 log.go:172] (0xc00031a000) (5) Data frame handling\nI0106 17:19:08.463321 239 log.go:172] (0xc00015c580) Data frame received for 1\nI0106 17:19:08.463342 239 log.go:172] (0xc0007f5220) (1) Data frame handling\nI0106 17:19:08.463352 239 log.go:172] (0xc0007f5220) (1) Data frame sent\nI0106 17:19:08.463369 239 log.go:172] (0xc00015c580) (0xc0007f5220) Stream removed, broadcasting: 1\nI0106 17:19:08.463534 239 log.go:172] (0xc00015c580) Go away received\nI0106 17:19:08.463562 239 log.go:172] (0xc00015c580) (0xc0007f5220) Stream removed, broadcasting: 1\nI0106 17:19:08.463666 239 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc000780000), 0x5:(*spdystream.Stream)(0xc00031a000)}\nI0106 17:19:08.463717 239 log.go:172] (0xc00015c580) (0xc000780000) Stream removed, broadcasting: 3\nI0106 17:19:08.463741 239 log.go:172] (0xc00015c580) (0xc00031a000) Stream removed, broadcasting: 5\n"
Jan 6 17:19:08.467: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 6 17:19:08.467: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' STEP: Rolling back to a previous revision Jan 6 17:19:38.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-65h95 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 6 17:19:38.811: INFO: stderr: "I0106 17:19:38.669697 263 log.go:172] (0xc0007bc370) (0xc000229360) Create stream\nI0106 17:19:38.669763 263 log.go:172] (0xc0007bc370) (0xc000229360) Stream added, broadcasting: 1\nI0106 17:19:38.672374 263 log.go:172] (0xc0007bc370) Reply frame received for 1\nI0106 17:19:38.672406 263 log.go:172] (0xc0007bc370) (0xc000718000) Create stream\nI0106 17:19:38.672416 263 log.go:172] (0xc0007bc370) (0xc000718000) Stream added, broadcasting: 3\nI0106 17:19:38.673465 263 log.go:172] (0xc0007bc370) Reply frame received for 3\nI0106 17:19:38.673523 263 log.go:172] (0xc0007bc370) (0xc0001fe000) Create stream\nI0106 17:19:38.673540 263 log.go:172] (0xc0007bc370) (0xc0001fe000) Stream added, broadcasting: 5\nI0106 17:19:38.674377 263 log.go:172] (0xc0007bc370) Reply frame received for 5\nI0106 17:19:38.805385 263 log.go:172] (0xc0007bc370) Data frame received for 3\nI0106 17:19:38.805440 263 log.go:172] (0xc000718000) (3) Data frame handling\nI0106 17:19:38.805453 263 log.go:172] (0xc000718000) (3) Data frame sent\nI0106 17:19:38.805461 263 log.go:172] (0xc0007bc370) Data frame received for 3\nI0106 17:19:38.805466 263 log.go:172] (0xc000718000) (3) Data frame handling\nI0106 17:19:38.805491 263 log.go:172] (0xc0007bc370) Data frame received for 5\nI0106 17:19:38.805496 263 log.go:172] (0xc0001fe000) (5) Data frame handling\nI0106 17:19:38.807242 263 log.go:172] (0xc0007bc370) Data frame received for 1\nI0106 17:19:38.807262 263 log.go:172] (0xc000229360) (1) Data frame handling\nI0106 17:19:38.807272 263 log.go:172] (0xc000229360) (1) Data 
frame sent\nI0106 17:19:38.807292 263 log.go:172] (0xc0007bc370) (0xc000229360) Stream removed, broadcasting: 1\nI0106 17:19:38.807342 263 log.go:172] (0xc0007bc370) Go away received\nI0106 17:19:38.807481 263 log.go:172] (0xc0007bc370) (0xc000229360) Stream removed, broadcasting: 1\nI0106 17:19:38.807497 263 log.go:172] (0xc0007bc370) (0xc000718000) Stream removed, broadcasting: 3\nI0106 17:19:38.807507 263 log.go:172] (0xc0007bc370) (0xc0001fe000) Stream removed, broadcasting: 5\n" Jan 6 17:19:38.811: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 6 17:19:38.811: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 6 17:19:48.846: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 6 17:19:58.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-65h95 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 6 17:19:59.131: INFO: stderr: "I0106 17:19:59.025003 286 log.go:172] (0xc00013a630) (0xc00072c640) Create stream\nI0106 17:19:59.025078 286 log.go:172] (0xc00013a630) (0xc00072c640) Stream added, broadcasting: 1\nI0106 17:19:59.027637 286 log.go:172] (0xc00013a630) Reply frame received for 1\nI0106 17:19:59.027694 286 log.go:172] (0xc00013a630) (0xc000506e60) Create stream\nI0106 17:19:59.027718 286 log.go:172] (0xc00013a630) (0xc000506e60) Stream added, broadcasting: 3\nI0106 17:19:59.028751 286 log.go:172] (0xc00013a630) Reply frame received for 3\nI0106 17:19:59.028828 286 log.go:172] (0xc00013a630) (0xc000506fa0) Create stream\nI0106 17:19:59.028995 286 log.go:172] (0xc00013a630) (0xc000506fa0) Stream added, broadcasting: 5\nI0106 17:19:59.029943 286 log.go:172] (0xc00013a630) Reply frame received for 5\nI0106 17:19:59.125550 286 log.go:172] (0xc00013a630) Data frame received for 5\nI0106 17:19:59.125604 286 log.go:172] 
(0xc00013a630) Data frame received for 3\nI0106 17:19:59.125661 286 log.go:172] (0xc000506e60) (3) Data frame handling\nI0106 17:19:59.125689 286 log.go:172] (0xc000506e60) (3) Data frame sent\nI0106 17:19:59.125709 286 log.go:172] (0xc00013a630) Data frame received for 3\nI0106 17:19:59.125727 286 log.go:172] (0xc000506e60) (3) Data frame handling\nI0106 17:19:59.125746 286 log.go:172] (0xc000506fa0) (5) Data frame handling\nI0106 17:19:59.126235 286 log.go:172] (0xc00013a630) Data frame received for 1\nI0106 17:19:59.126274 286 log.go:172] (0xc00072c640) (1) Data frame handling\nI0106 17:19:59.126295 286 log.go:172] (0xc00072c640) (1) Data frame sent\nI0106 17:19:59.126312 286 log.go:172] (0xc00013a630) (0xc00072c640) Stream removed, broadcasting: 1\nI0106 17:19:59.126362 286 log.go:172] (0xc00013a630) Go away received\nI0106 17:19:59.126636 286 log.go:172] (0xc00013a630) (0xc00072c640) Stream removed, broadcasting: 1\nI0106 17:19:59.126678 286 log.go:172] (0xc00013a630) (0xc000506e60) Stream removed, broadcasting: 3\nI0106 17:19:59.126704 286 log.go:172] (0xc00013a630) (0xc000506fa0) Stream removed, broadcasting: 5\n" Jan 6 17:19:59.131: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 6 17:19:59.131: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 6 17:20:09.152: INFO: Waiting for StatefulSet e2e-tests-statefulset-65h95/ss2 to complete update Jan 6 17:20:09.152: INFO: Waiting for Pod e2e-tests-statefulset-65h95/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 6 17:20:09.152: INFO: Waiting for Pod e2e-tests-statefulset-65h95/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 6 17:20:19.161: INFO: Waiting for StatefulSet e2e-tests-statefulset-65h95/ss2 to complete update Jan 6 17:20:19.161: INFO: Waiting for Pod e2e-tests-statefulset-65h95/ss2-0 to have revision ss2-7c9b54fd4c update revision 
ss2-6c5cd755cd Jan 6 17:20:29.163: INFO: Waiting for StatefulSet e2e-tests-statefulset-65h95/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 6 17:20:39.159: INFO: Deleting all statefulset in ns e2e-tests-statefulset-65h95 Jan 6 17:20:39.162: INFO: Scaling statefulset ss2 to 0 Jan 6 17:20:59.183: INFO: Waiting for statefulset status.replicas updated to 0 Jan 6 17:20:59.186: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:20:59.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-65h95" for this suite. Jan 6 17:21:07.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:21:07.340: INFO: namespace: e2e-tests-statefulset-65h95, resource: bindings, ignored listing per whitelist Jan 6 17:21:07.394: INFO: namespace e2e-tests-statefulset-65h95 deletion completed in 8.190793764s • [SLOW TEST:159.669 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:21:07.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-90137169-5043-11eb-8655-0242ac110009 STEP: Creating secret with name secret-projected-all-test-volume-90137136-5043-11eb-8655-0242ac110009 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 6 17:21:07.540: INFO: Waiting up to 5m0s for pod "projected-volume-901370bc-5043-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-8z24b" to be "success or failure" Jan 6 17:21:07.562: INFO: Pod "projected-volume-901370bc-5043-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 21.618695ms Jan 6 17:21:09.776: INFO: Pod "projected-volume-901370bc-5043-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23566094s Jan 6 17:21:11.779: INFO: Pod "projected-volume-901370bc-5043-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.239497074s Jan 6 17:21:13.784: INFO: Pod "projected-volume-901370bc-5043-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.243825443s STEP: Saw pod success Jan 6 17:21:13.784: INFO: Pod "projected-volume-901370bc-5043-11eb-8655-0242ac110009" satisfied condition "success or failure" Jan 6 17:21:13.786: INFO: Trying to get logs from node hunter-worker pod projected-volume-901370bc-5043-11eb-8655-0242ac110009 container projected-all-volume-test: STEP: delete the pod Jan 6 17:21:13.806: INFO: Waiting for pod projected-volume-901370bc-5043-11eb-8655-0242ac110009 to disappear Jan 6 17:21:13.825: INFO: Pod projected-volume-901370bc-5043-11eb-8655-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:21:13.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8z24b" for this suite. Jan 6 17:21:19.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:21:19.928: INFO: namespace: e2e-tests-projected-8z24b, resource: bindings, ignored listing per whitelist Jan 6 17:21:19.952: INFO: namespace e2e-tests-projected-8z24b deletion completed in 6.12322269s • [SLOW TEST:12.558 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:21:19.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 6 17:21:20.041: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97882a67-5043-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-vxsgd" to be "success or failure" Jan 6 17:21:20.043: INFO: Pod "downwardapi-volume-97882a67-5043-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.680092ms Jan 6 17:21:22.047: INFO: Pod "downwardapi-volume-97882a67-5043-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006782914s Jan 6 17:21:24.052: INFO: Pod "downwardapi-volume-97882a67-5043-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011147139s STEP: Saw pod success Jan 6 17:21:24.052: INFO: Pod "downwardapi-volume-97882a67-5043-11eb-8655-0242ac110009" satisfied condition "success or failure" Jan 6 17:21:24.055: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-97882a67-5043-11eb-8655-0242ac110009 container client-container: STEP: delete the pod Jan 6 17:21:24.084: INFO: Waiting for pod downwardapi-volume-97882a67-5043-11eb-8655-0242ac110009 to disappear Jan 6 17:21:24.101: INFO: Pod downwardapi-volume-97882a67-5043-11eb-8655-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:21:24.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vxsgd" for this suite. Jan 6 17:21:30.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:21:30.241: INFO: namespace: e2e-tests-projected-vxsgd, resource: bindings, ignored listing per whitelist Jan 6 17:21:30.242: INFO: namespace e2e-tests-projected-vxsgd deletion completed in 6.136077041s • [SLOW TEST:10.289 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:21:30.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 6 17:21:50.402: INFO: Container started at 2021-01-06 17:21:33 +0000 UTC, pod became ready at 2021-01-06 17:21:48 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:21:50.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-sdtdb" for this suite. 
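The probe behaviour exercised above (container started at 17:21:33, pod ready only at 17:21:48) is what a readiness probe with an initial delay produces. A minimal sketch of such a pod spec, with illustrative names and timings not taken from the test:

```yaml
# Hypothetical sketch: a readinessProbe with initialDelaySeconds keeps the
# pod un-ready for a while after the container starts, matching the gap
# between "Container started" and "pod became ready" in the log above.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo        # illustrative name
spec:
  containers:
  - name: probe-demo
    image: nginx:1.14-alpine  # image choice is an assumption
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15 # probe is not run before this delay elapses
      periodSeconds: 5
```

Because the probe never fails after the delay, the container is also never restarted, which is the second property the test asserts.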
Jan 6 17:22:14.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:22:14.501: INFO: namespace: e2e-tests-container-probe-sdtdb, resource: bindings, ignored listing per whitelist Jan 6 17:22:14.533: INFO: namespace e2e-tests-container-probe-sdtdb deletion completed in 24.126609958s • [SLOW TEST:44.291 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:22:14.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 6 17:22:15.236: INFO: Pod name wrapped-volume-race-b86bfceb-5043-11eb-8655-0242ac110009: Found 0 pods out of 5 Jan 6 17:22:20.245: INFO: Pod name wrapped-volume-race-b86bfceb-5043-11eb-8655-0242ac110009: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-b86bfceb-5043-11eb-8655-0242ac110009 in namespace e2e-tests-emptydir-wrapper-m4b7b, will wait for the garbage collector to delete the pods Jan 6 17:24:26.339: INFO: Deleting ReplicationController wrapped-volume-race-b86bfceb-5043-11eb-8655-0242ac110009 took: 7.34461ms Jan 6 17:24:26.439: INFO: Terminating ReplicationController wrapped-volume-race-b86bfceb-5043-11eb-8655-0242ac110009 pods took: 100.265081ms STEP: Creating RC which spawns configmap-volume pods Jan 6 17:25:15.304: INFO: Pod name wrapped-volume-race-23bb3742-5044-11eb-8655-0242ac110009: Found 0 pods out of 5 Jan 6 17:25:20.343: INFO: Pod name wrapped-volume-race-23bb3742-5044-11eb-8655-0242ac110009: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-23bb3742-5044-11eb-8655-0242ac110009 in namespace e2e-tests-emptydir-wrapper-m4b7b, will wait for the garbage collector to delete the pods Jan 6 17:27:58.422: INFO: Deleting ReplicationController wrapped-volume-race-23bb3742-5044-11eb-8655-0242ac110009 took: 6.952755ms Jan 6 17:27:58.523: INFO: Terminating ReplicationController wrapped-volume-race-23bb3742-5044-11eb-8655-0242ac110009 pods took: 100.215788ms STEP: Creating RC which spawns configmap-volume pods Jan 6 17:28:35.849: INFO: Pod name wrapped-volume-race-9b49bfe8-5044-11eb-8655-0242ac110009: Found 0 pods out of 5 Jan 6 17:28:40.855: INFO: Pod name wrapped-volume-race-9b49bfe8-5044-11eb-8655-0242ac110009: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9b49bfe8-5044-11eb-8655-0242ac110009 in namespace e2e-tests-emptydir-wrapper-m4b7b, will wait for the garbage collector to delete the pods Jan 6 17:30:48.993: INFO: Deleting ReplicationController wrapped-volume-race-9b49bfe8-5044-11eb-8655-0242ac110009 took: 8.112263ms Jan 6 17:30:49.094: INFO: Terminating ReplicationController wrapped-volume-race-9b49bfe8-5044-11eb-8655-0242ac110009 pods took: 
100.269493ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:31:36.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-m4b7b" for this suite. Jan 6 17:31:44.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:31:44.882: INFO: namespace: e2e-tests-emptydir-wrapper-m4b7b, resource: bindings, ignored listing per whitelist Jan 6 17:31:44.918: INFO: namespace e2e-tests-emptydir-wrapper-m4b7b deletion completed in 8.105193578s • [SLOW TEST:570.384 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:31:44.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in 
namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Jan 6 17:31:49.129: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:32:13.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-2wbch" for this suite. Jan 6 17:32:19.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:32:19.238: INFO: namespace: e2e-tests-namespaces-2wbch, resource: bindings, ignored listing per whitelist Jan 6 17:32:19.322: INFO: namespace e2e-tests-namespaces-2wbch deletion completed in 6.109046967s STEP: Destroying namespace "e2e-tests-nsdeletetest-zwp5z" for this suite. Jan 6 17:32:19.346: INFO: Namespace e2e-tests-nsdeletetest-zwp5z was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-bfspf" for this suite. 
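The deletion cascade verified above can be reproduced with an ordinary pod in a throwaway namespace: deleting the namespace removes every pod inside it. A sketch, with all names illustrative:

```yaml
# Hypothetical manifest: a namespace and a pod inside it. Deleting the
# namespace (kubectl delete ns nsdelete-demo) removes the pod with it;
# the e2e test then recreates the namespace and verifies it is empty.
apiVersion: v1
kind: Namespace
metadata:
  name: nsdelete-demo            # illustrative name
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod                 # illustrative name
  namespace: nsdelete-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1  # image choice is an assumption
```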
Jan 6 17:32:25.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:32:25.472: INFO: namespace: e2e-tests-nsdeletetest-bfspf, resource: bindings, ignored listing per whitelist Jan 6 17:32:25.476: INFO: namespace e2e-tests-nsdeletetest-bfspf deletion completed in 6.130342918s • [SLOW TEST:40.558 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:32:25.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 6 17:32:25.586: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2439d868-5045-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-6mqx8" to be "success or failure" Jan 6 17:32:25.589: 
INFO: Pod "downwardapi-volume-2439d868-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.213361ms Jan 6 17:32:27.823: INFO: Pod "downwardapi-volume-2439d868-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2373967s Jan 6 17:32:29.827: INFO: Pod "downwardapi-volume-2439d868-5045-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.241101343s STEP: Saw pod success Jan 6 17:32:29.827: INFO: Pod "downwardapi-volume-2439d868-5045-11eb-8655-0242ac110009" satisfied condition "success or failure" Jan 6 17:32:29.829: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-2439d868-5045-11eb-8655-0242ac110009 container client-container: STEP: delete the pod Jan 6 17:32:29.917: INFO: Waiting for pod downwardapi-volume-2439d868-5045-11eb-8655-0242ac110009 to disappear Jan 6 17:32:29.931: INFO: Pod downwardapi-volume-2439d868-5045-11eb-8655-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:32:29.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6mqx8" for this suite. 
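The downward API volume exercised above exposes the container's cpu limit as a file; when the container sets no cpu limit, the kubelet substitutes the node's allocatable cpu. A sketch of such a projected volume, with illustrative names:

```yaml
# Hypothetical sketch of a projected downward API volume. With no cpu limit
# on the container, the cpu_limit file receives the node allocatable value,
# which is the default the test asserts.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox               # image choice is an assumption
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m        # report the value in millicores
```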
Jan 6 17:32:35.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:32:35.965: INFO: namespace: e2e-tests-projected-6mqx8, resource: bindings, ignored listing per whitelist
Jan 6 17:32:36.031: INFO: namespace e2e-tests-projected-6mqx8 deletion completed in 6.092676417s
• [SLOW TEST:10.555 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:32:36.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan 6 17:32:36.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 6 17:32:38.917: INFO: stderr: ""
Jan 6 17:32:38.917: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:43795\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:43795/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:32:38.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-s2zz2" for this suite.
Jan 6 17:32:44.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:32:44.957: INFO: namespace: e2e-tests-kubectl-s2zz2, resource: bindings, ignored listing per whitelist
Jan 6 17:32:45.040: INFO: namespace e2e-tests-kubectl-s2zz2 deletion completed in 6.119343186s
• [SLOW TEST:9.009 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl cluster-info
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:32:45.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 6 17:32:45.120: INFO: Waiting up to 5m0s for pod "client-containers-2fde944e-5045-11eb-8655-0242ac110009" in namespace "e2e-tests-containers-mzpff" to be "success or failure"
Jan 6 17:32:45.136: INFO: Pod "client-containers-2fde944e-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.101968ms
Jan 6 17:32:47.146: INFO: Pod "client-containers-2fde944e-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026454458s
Jan 6 17:32:49.152: INFO: Pod "client-containers-2fde944e-5045-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032662152s
STEP: Saw pod success
Jan 6 17:32:49.153: INFO: Pod "client-containers-2fde944e-5045-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan 6 17:32:49.155: INFO: Trying to get logs from node hunter-worker pod client-containers-2fde944e-5045-11eb-8655-0242ac110009 container test-container:
STEP: delete the pod
Jan 6 17:32:49.171: INFO: Waiting for pod client-containers-2fde944e-5045-11eb-8655-0242ac110009 to disappear
Jan 6 17:32:49.176: INFO: Pod client-containers-2fde944e-5045-11eb-8655-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:32:49.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-mzpff" for this suite.
Jan 6 17:32:55.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:32:55.287: INFO: namespace: e2e-tests-containers-mzpff, resource: bindings, ignored listing per whitelist
Jan 6 17:32:55.319: INFO: namespace e2e-tests-containers-mzpff deletion completed in 6.139438965s
• [SLOW TEST:10.278 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:32:55.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 6 17:32:55.418: INFO: Waiting up to 5m0s for pod "pod-36023978-5045-11eb-8655-0242ac110009" in namespace "e2e-tests-emptydir-25gtv" to be "success or failure"
Jan 6 17:32:55.422: INFO: Pod "pod-36023978-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.728294ms
Jan 6 17:32:57.426: INFO: Pod "pod-36023978-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007785635s
Jan 6 17:32:59.430: INFO: Pod "pod-36023978-5045-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011661405s
STEP: Saw pod success
Jan 6 17:32:59.430: INFO: Pod "pod-36023978-5045-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan 6 17:32:59.433: INFO: Trying to get logs from node hunter-worker pod pod-36023978-5045-11eb-8655-0242ac110009 container test-container:
STEP: delete the pod
Jan 6 17:32:59.462: INFO: Waiting for pod pod-36023978-5045-11eb-8655-0242ac110009 to disappear
Jan 6 17:32:59.471: INFO: Pod pod-36023978-5045-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:32:59.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-25gtv" for this suite.
Jan 6 17:33:05.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:33:05.522: INFO: namespace: e2e-tests-emptydir-25gtv, resource: bindings, ignored listing per whitelist
Jan 6 17:33:05.591: INFO: namespace e2e-tests-emptydir-25gtv deletion completed in 6.117164775s
• [SLOW TEST:10.271 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:33:05.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:33:09.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-jp28x" for this suite.
Jan 6 17:33:49.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:33:49.798: INFO: namespace: e2e-tests-kubelet-test-jp28x, resource: bindings, ignored listing per whitelist
Jan 6 17:33:49.812: INFO: namespace e2e-tests-kubelet-test-jp28x deletion completed in 40.100546854s
• [SLOW TEST:44.221 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:33:49.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 6 17:33:53.916: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-567c500e-5045-11eb-8655-0242ac110009,GenerateName:,Namespace:e2e-tests-events-98h9f,SelfLink:/api/v1/namespaces/e2e-tests-events-98h9f/pods/send-events-567c500e-5045-11eb-8655-0242ac110009,UID:567dd26f-5045-11eb-8302-0242ac120002,ResourceVersion:18051101,Generation:0,CreationTimestamp:2021-01-06 17:33:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 888525100,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p77hw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p77hw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-p77hw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d6c90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d6cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 17:33:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 17:33:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 17:33:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 17:33:49 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.230,StartTime:2021-01-06 17:33:49 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2021-01-06 17:33:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://ac2d346e518b084d2ee28c34a8ed922c8a2745bfa272e434bf32cc1b3939679e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Jan 6 17:33:55.922: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 6 17:33:57.927: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:33:57.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-98h9f" for this suite.
Jan 6 17:34:35.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:34:35.996: INFO: namespace: e2e-tests-events-98h9f, resource: bindings, ignored listing per whitelist
Jan 6 17:34:36.062: INFO: namespace e2e-tests-events-98h9f deletion completed in 38.126192159s
• [SLOW TEST:46.250 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:34:36.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-720efc5e-5045-11eb-8655-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 6 17:34:36.158: INFO: Waiting up to 5m0s for pod "pod-secrets-720f60c8-5045-11eb-8655-0242ac110009" in namespace "e2e-tests-secrets-btfqp" to be "success or failure"
Jan 6 17:34:36.192: INFO: Pod "pod-secrets-720f60c8-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 34.138885ms
Jan 6 17:34:38.331: INFO: Pod "pod-secrets-720f60c8-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172925628s
Jan 6 17:34:40.336: INFO: Pod "pod-secrets-720f60c8-5045-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.178353453s
STEP: Saw pod success
Jan 6 17:34:40.336: INFO: Pod "pod-secrets-720f60c8-5045-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan 6 17:34:40.338: INFO: Trying to get logs from node hunter-worker pod pod-secrets-720f60c8-5045-11eb-8655-0242ac110009 container secret-volume-test:
STEP: delete the pod
Jan 6 17:34:40.354: INFO: Waiting for pod pod-secrets-720f60c8-5045-11eb-8655-0242ac110009 to disappear
Jan 6 17:34:40.358: INFO: Pod pod-secrets-720f60c8-5045-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:34:40.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-btfqp" for this suite.
Jan 6 17:34:46.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:34:46.414: INFO: namespace: e2e-tests-secrets-btfqp, resource: bindings, ignored listing per whitelist
Jan 6 17:34:46.477: INFO: namespace e2e-tests-secrets-btfqp deletion completed in 6.115375154s
• [SLOW TEST:10.414 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:34:46.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 6 17:34:46.581: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 6 17:34:46.599: INFO: Waiting for terminating namespaces to be deleted...
Jan 6 17:34:46.601: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Jan 6 17:34:46.607: INFO: kube-proxy-ljths from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan 6 17:34:46.607: INFO: Container kube-proxy ready: true, restart count 0
Jan 6 17:34:46.608: INFO: kindnet-8chxg from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan 6 17:34:46.608: INFO: Container kindnet-cni ready: true, restart count 0
Jan 6 17:34:46.608: INFO: chaos-daemon-6czfr from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan 6 17:34:46.608: INFO: Container chaos-daemon ready: true, restart count 0
Jan 6 17:34:46.608: INFO: coredns-54ff9cd656-grddq from kube-system started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan 6 17:34:46.608: INFO: Container coredns ready: true, restart count 0
Jan 6 17:34:46.608: INFO: coredns-54ff9cd656-mplq2 from kube-system started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan 6 17:34:46.608: INFO: Container coredns ready: true, restart count 0
Jan 6 17:34:46.608: INFO: local-path-provisioner-65f5ddcc-46m7g from local-path-storage started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan 6 17:34:46.608: INFO: Container local-path-provisioner ready: true, restart count 41
Jan 6 17:34:46.608: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Jan 6 17:34:46.614: INFO: coredns-coredns-5d8cb876b4-kkw4n from startup-test started at 2021-01-01 20:30:53 +0000 UTC (1 container statuses recorded)
Jan 6 17:34:46.614: INFO: Container coredns ready: true, restart count 0
Jan 6 17:34:46.614: INFO: chaos-daemon-9ptbc from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan 6 17:34:46.614: INFO: Container chaos-daemon ready: true, restart count 0
Jan 6 17:34:46.614: INFO: chaos-controller-manager-5c78c48d45-tq7m7 from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan 6 17:34:46.614: INFO: Container chaos-mesh ready: true, restart count 0
Jan 6 17:34:46.614: INFO: kindnet-8vqrg from kube-system started at 2020-09-23 08:24:26 +0000 UTC (1 container statuses recorded)
Jan 6 17:34:46.614: INFO: Container kindnet-cni ready: true, restart count 0
Jan 6 17:34:46.614: INFO: kube-proxy-mg87j from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan 6 17:34:46.614: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7ab708d7-5045-11eb-8655-0242ac110009 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-7ab708d7-5045-11eb-8655-0242ac110009 off the node hunter-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-7ab708d7-5045-11eb-8655-0242ac110009
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:34:54.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-qplqt" for this suite.
Jan 6 17:35:08.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:35:08.836: INFO: namespace: e2e-tests-sched-pred-qplqt, resource: bindings, ignored listing per whitelist
Jan 6 17:35:08.856: INFO: namespace e2e-tests-sched-pred-qplqt deletion completed in 14.114006194s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:22.378 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:35:08.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 6 17:35:09.039: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85a4c10f-5045-11eb-8655-0242ac110009" in namespace "e2e-tests-downward-api-qmgzt" to be "success or failure"
Jan 6 17:35:09.059: INFO: Pod "downwardapi-volume-85a4c10f-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 19.982242ms
Jan 6 17:35:11.063: INFO: Pod "downwardapi-volume-85a4c10f-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024005344s
Jan 6 17:35:13.067: INFO: Pod "downwardapi-volume-85a4c10f-5045-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028392842s
STEP: Saw pod success
Jan 6 17:35:13.067: INFO: Pod "downwardapi-volume-85a4c10f-5045-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan 6 17:35:13.070: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-85a4c10f-5045-11eb-8655-0242ac110009 container client-container:
STEP: delete the pod
Jan 6 17:35:13.134: INFO: Waiting for pod downwardapi-volume-85a4c10f-5045-11eb-8655-0242ac110009 to disappear
Jan 6 17:35:13.311: INFO: Pod downwardapi-volume-85a4c10f-5045-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:35:13.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qmgzt" for this suite.
Jan 6 17:35:19.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:35:19.398: INFO: namespace: e2e-tests-downward-api-qmgzt, resource: bindings, ignored listing per whitelist
Jan 6 17:35:19.474: INFO: namespace e2e-tests-downward-api-qmgzt deletion completed in 6.160450623s
• [SLOW TEST:10.618 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:35:19.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan 6 17:35:19.574: INFO: Waiting up to 5m0s for pod "client-containers-8bef5f28-5045-11eb-8655-0242ac110009" in namespace "e2e-tests-containers-s7gw4" to be "success or failure"
Jan 6 17:35:19.578: INFO: Pod "client-containers-8bef5f28-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.92975ms
Jan 6 17:35:21.659: INFO: Pod "client-containers-8bef5f28-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085099459s
Jan 6 17:35:23.663: INFO: Pod "client-containers-8bef5f28-5045-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089301221s
STEP: Saw pod success
Jan 6 17:35:23.663: INFO: Pod "client-containers-8bef5f28-5045-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan 6 17:35:23.666: INFO: Trying to get logs from node hunter-worker pod client-containers-8bef5f28-5045-11eb-8655-0242ac110009 container test-container:
STEP: delete the pod
Jan 6 17:35:23.722: INFO: Waiting for pod client-containers-8bef5f28-5045-11eb-8655-0242ac110009 to disappear
Jan 6 17:35:23.840: INFO: Pod client-containers-8bef5f28-5045-11eb-8655-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:35:23.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-s7gw4" for this suite.
Jan 6 17:35:29.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:35:29.923: INFO: namespace: e2e-tests-containers-s7gw4, resource: bindings, ignored listing per whitelist
Jan 6 17:35:29.946: INFO: namespace e2e-tests-containers-s7gw4 deletion completed in 6.101499841s
• [SLOW TEST:10.471 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:35:29.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 6 17:35:30.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wggjp'
Jan 6 17:35:30.431: INFO: stderr: ""
Jan 6 17:35:30.431: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 6 17:35:31.435: INFO: Selector matched 1 pods for map[app:redis]
Jan 6 17:35:31.435: INFO: Found 0 / 1
Jan 6 17:35:32.436: INFO: Selector matched 1 pods for map[app:redis]
Jan 6 17:35:32.436: INFO: Found 0 / 1
Jan 6 17:35:33.481: INFO: Selector matched 1 pods for map[app:redis]
Jan 6 17:35:33.481: INFO: Found 0 / 1
Jan 6 17:35:34.436: INFO: Selector matched 1 pods for map[app:redis]
Jan 6 17:35:34.436: INFO: Found 1 / 1
Jan 6 17:35:34.436: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jan 6 17:35:34.439: INFO: Selector matched 1 pods for map[app:redis]
Jan 6 17:35:34.439: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 6 17:35:34.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-h2sgh --namespace=e2e-tests-kubectl-wggjp -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 6 17:35:34.538: INFO: stderr: ""
Jan 6 17:35:34.538: INFO: stdout: "pod/redis-master-h2sgh patched\n"
STEP: checking annotations
Jan 6 17:35:34.583: INFO: Selector matched 1 pods for map[app:redis]
Jan 6 17:35:34.583: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:35:34.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wggjp" for this suite.
Jan 6 17:35:56.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:35:56.638: INFO: namespace: e2e-tests-kubectl-wggjp, resource: bindings, ignored listing per whitelist Jan 6 17:35:56.685: INFO: namespace e2e-tests-kubectl-wggjp deletion completed in 22.098160576s • [SLOW TEST:26.738 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:35:56.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 6 17:35:56.834: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qc6jr,SelfLink:/api/v1/namespaces/e2e-tests-watch-qc6jr/configmaps/e2e-watch-test-watch-closed,UID:a220290f-5045-11eb-8302-0242ac120002,ResourceVersion:18051511,Generation:0,CreationTimestamp:2021-01-06 17:35:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 6 17:35:56.835: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qc6jr,SelfLink:/api/v1/namespaces/e2e-tests-watch-qc6jr/configmaps/e2e-watch-test-watch-closed,UID:a220290f-5045-11eb-8302-0242ac120002,ResourceVersion:18051512,Generation:0,CreationTimestamp:2021-01-06 17:35:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 6 17:35:56.900: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qc6jr,SelfLink:/api/v1/namespaces/e2e-tests-watch-qc6jr/configmaps/e2e-watch-test-watch-closed,UID:a220290f-5045-11eb-8302-0242ac120002,ResourceVersion:18051514,Generation:0,CreationTimestamp:2021-01-06 17:35:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 6 17:35:56.900: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qc6jr,SelfLink:/api/v1/namespaces/e2e-tests-watch-qc6jr/configmaps/e2e-watch-test-watch-closed,UID:a220290f-5045-11eb-8302-0242ac120002,ResourceVersion:18051515,Generation:0,CreationTimestamp:2021-01-06 17:35:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:35:56.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-qc6jr" for this suite. 
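The watch test above closes its first watch after two notifications, mutates the ConfigMap again, then opens a second watch from the last `ResourceVersion` it observed and still sees every later change. A toy in-memory model of that resume semantics (resourceVersion numbers shortened from the log's 180515xx values for readability):

```python
# Events the test generates for the configmap, in resourceVersion order.
EVENTS = [
    ("ADDED",    511),
    ("MODIFIED", 512),  # mutation: 1
    ("MODIFIED", 514),  # mutation: 2, sent while the first watch was closed
    ("DELETED",  515),
]

def watch_from(events, resource_version):
    """Yield events strictly newer than resource_version, mimicking a
    watch started with ?resourceVersion=<rv>."""
    for kind, rv in events:
        if rv > resource_version:
            yield (kind, rv)

# First watch: observe two notifications, then close.
first = list(watch_from(EVENTS, 0))[:2]
last_rv = first[-1][1]
# Second watch resumes from the last observed resourceVersion and
# receives the MODIFIED and DELETED events that happened in between.
resumed = list(watch_from(EVENTS, last_rv))
```

This mirrors the log: the restarted watch delivers `MODIFIED` (mutation: 2) and `DELETED` without replaying the events the first watch already saw.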
Jan 6 17:36:02.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:36:03.008: INFO: namespace: e2e-tests-watch-qc6jr, resource: bindings, ignored listing per whitelist Jan 6 17:36:03.012: INFO: namespace e2e-tests-watch-qc6jr deletion completed in 6.103668899s • [SLOW TEST:6.327 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:36:03.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 6 17:36:03.099: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5e09f10-5045-11eb-8655-0242ac110009" in namespace "e2e-tests-downward-api-khttp" to be 
"success or failure" Jan 6 17:36:03.102: INFO: Pod "downwardapi-volume-a5e09f10-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.056449ms Jan 6 17:36:05.279: INFO: Pod "downwardapi-volume-a5e09f10-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179395857s Jan 6 17:36:07.283: INFO: Pod "downwardapi-volume-a5e09f10-5045-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.183209213s STEP: Saw pod success Jan 6 17:36:07.283: INFO: Pod "downwardapi-volume-a5e09f10-5045-11eb-8655-0242ac110009" satisfied condition "success or failure" Jan 6 17:36:07.285: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a5e09f10-5045-11eb-8655-0242ac110009 container client-container: STEP: delete the pod Jan 6 17:36:07.363: INFO: Waiting for pod downwardapi-volume-a5e09f10-5045-11eb-8655-0242ac110009 to disappear Jan 6 17:36:07.379: INFO: Pod downwardapi-volume-a5e09f10-5045-11eb-8655-0242ac110009 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:36:07.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-khttp" for this suite. 
Jan 6 17:36:13.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:36:13.443: INFO: namespace: e2e-tests-downward-api-khttp, resource: bindings, ignored listing per whitelist Jan 6 17:36:13.503: INFO: namespace e2e-tests-downward-api-khttp deletion completed in 6.120282579s • [SLOW TEST:10.490 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:36:13.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-qzg7q [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in 
namespace e2e-tests-statefulset-qzg7q STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-qzg7q STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-qzg7q STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-qzg7q Jan 6 17:36:17.735: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-qzg7q, name: ss-0, uid: ad915532-5045-11eb-8302-0242ac120002, status phase: Pending. Waiting for statefulset controller to delete. Jan 6 17:36:24.792: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-qzg7q, name: ss-0, uid: ad915532-5045-11eb-8302-0242ac120002, status phase: Failed. Waiting for statefulset controller to delete. Jan 6 17:36:24.805: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-qzg7q, name: ss-0, uid: ad915532-5045-11eb-8302-0242ac120002, status phase: Failed. Waiting for statefulset controller to delete. Jan 6 17:36:24.819: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-qzg7q STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-qzg7q STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-qzg7q and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 6 17:36:35.109: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qzg7q Jan 6 17:36:35.111: INFO: Scaling statefulset ss to 0 Jan 6 17:36:45.133: INFO: Waiting for statefulset status.replicas updated to 0 Jan 6 17:36:45.136: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:36:45.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-statefulset-qzg7q" for this suite. Jan 6 17:36:51.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:36:51.348: INFO: namespace: e2e-tests-statefulset-qzg7q, resource: bindings, ignored listing per whitelist Jan 6 17:36:51.348: INFO: namespace e2e-tests-statefulset-qzg7q deletion completed in 6.120937335s • [SLOW TEST:37.845 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:36:51.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jan 6 17:36:51.448: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 6 17:36:51.465: INFO: Waiting for terminating namespaces to be deleted... 
Jan 6 17:36:51.467: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jan 6 17:36:51.473: INFO: coredns-54ff9cd656-grddq from kube-system started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded) Jan 6 17:36:51.473: INFO: Container coredns ready: true, restart count 0 Jan 6 17:36:51.473: INFO: coredns-54ff9cd656-mplq2 from kube-system started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded) Jan 6 17:36:51.473: INFO: Container coredns ready: true, restart count 0 Jan 6 17:36:51.473: INFO: local-path-provisioner-65f5ddcc-46m7g from local-path-storage started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded) Jan 6 17:36:51.473: INFO: Container local-path-provisioner ready: true, restart count 41 Jan 6 17:36:51.473: INFO: kube-proxy-ljths from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded) Jan 6 17:36:51.473: INFO: Container kube-proxy ready: true, restart count 0 Jan 6 17:36:51.473: INFO: kindnet-8chxg from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded) Jan 6 17:36:51.473: INFO: Container kindnet-cni ready: true, restart count 0 Jan 6 17:36:51.473: INFO: chaos-daemon-6czfr from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded) Jan 6 17:36:51.473: INFO: Container chaos-daemon ready: true, restart count 0 Jan 6 17:36:51.473: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jan 6 17:36:51.478: INFO: chaos-controller-manager-5c78c48d45-tq7m7 from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded) Jan 6 17:36:51.478: INFO: Container chaos-mesh ready: true, restart count 0 Jan 6 17:36:51.478: INFO: chaos-daemon-9ptbc from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded) Jan 6 17:36:51.478: INFO: Container chaos-daemon ready: true, restart count 0 Jan 6 17:36:51.478: INFO: kube-proxy-mg87j from kube-system 
started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded) Jan 6 17:36:51.478: INFO: Container kube-proxy ready: true, restart count 0 Jan 6 17:36:51.478: INFO: kindnet-8vqrg from kube-system started at 2020-09-23 08:24:26 +0000 UTC (1 container statuses recorded) Jan 6 17:36:51.478: INFO: Container kindnet-cni ready: true, restart count 0 Jan 6 17:36:51.478: INFO: coredns-coredns-5d8cb876b4-kkw4n from startup-test started at 2021-01-01 20:30:53 +0000 UTC (1 container statuses recorded) Jan 6 17:36:51.478: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Jan 6 17:36:51.600: INFO: Pod chaos-controller-manager-5c78c48d45-tq7m7 requesting resource cpu=25m on Node hunter-worker2 Jan 6 17:36:51.600: INFO: Pod chaos-daemon-6czfr requesting resource cpu=0m on Node hunter-worker Jan 6 17:36:51.600: INFO: Pod chaos-daemon-9ptbc requesting resource cpu=0m on Node hunter-worker2 Jan 6 17:36:51.600: INFO: Pod coredns-54ff9cd656-grddq requesting resource cpu=100m on Node hunter-worker Jan 6 17:36:51.600: INFO: Pod coredns-54ff9cd656-mplq2 requesting resource cpu=100m on Node hunter-worker Jan 6 17:36:51.600: INFO: Pod kindnet-8chxg requesting resource cpu=100m on Node hunter-worker Jan 6 17:36:51.600: INFO: Pod kindnet-8vqrg requesting resource cpu=100m on Node hunter-worker2 Jan 6 17:36:51.600: INFO: Pod kube-proxy-ljths requesting resource cpu=0m on Node hunter-worker Jan 6 17:36:51.600: INFO: Pod kube-proxy-mg87j requesting resource cpu=0m on Node hunter-worker2 Jan 6 17:36:51.600: INFO: Pod local-path-provisioner-65f5ddcc-46m7g requesting resource cpu=0m on Node hunter-worker Jan 6 17:36:51.600: INFO: Pod coredns-coredns-5d8cb876b4-kkw4n requesting resource cpu=100m on 
Node hunter-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-c2cb6391-5045-11eb-8655-0242ac110009.1657b5307c4b256d], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-vhbzs/filler-pod-c2cb6391-5045-11eb-8655-0242ac110009 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-c2cb6391-5045-11eb-8655-0242ac110009.1657b5310ed51078], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c2cb6391-5045-11eb-8655-0242ac110009.1657b531577c14c7], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-c2cb6391-5045-11eb-8655-0242ac110009.1657b5316793cc6d], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-c2cdb675-5045-11eb-8655-0242ac110009.1657b5307c82e679], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-vhbzs/filler-pod-c2cdb675-5045-11eb-8655-0242ac110009 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-c2cdb675-5045-11eb-8655-0242ac110009.1657b530d198039f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c2cdb675-5045-11eb-8655-0242ac110009.1657b5312ce98e01], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-c2cdb675-5045-11eb-8655-0242ac110009.1657b5314ae3a7f6], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.1657b531e2fbe44c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] 
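The `FailedScheduling` event above ("2 Insufficient cpu") comes from the scheduler's resource-fit predicate: the filler pods consume most of each node's allocatable CPU, so one more pod with a nontrivial request cannot fit. A minimal sketch of that check, with hypothetical millicpu numbers (not the actual allocatable values of hunter-worker):

```python
def cpu_fits(node_allocatable_m, running_requests_m, new_request_m):
    """The predicate the test exercises: a pod fits a node only if the
    sum of existing CPU requests plus its own stays within allocatable."""
    return sum(running_requests_m) + new_request_m <= node_allocatable_m

# Hypothetical node with 2000m allocatable, already carrying 300m of
# system pods plus a 1600m filler pod, as in the test's setup.
fits_big = cpu_fits(2000, [300, 1600], 500)    # rejected: over allocatable
fits_small = cpu_fits(2000, [300, 1600], 100)  # accepted: exactly fits
```

Note the predicate sums *requests*, not actual usage, which is why idle filler pods are enough to make the node unschedulable.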
STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:36:58.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-vhbzs" for this suite. Jan 6 17:37:04.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:37:04.890: INFO: namespace: e2e-tests-sched-pred-vhbzs, resource: bindings, ignored listing per whitelist Jan 6 17:37:04.924: INFO: namespace e2e-tests-sched-pred-vhbzs deletion completed in 6.117681615s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:13.575 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:37:04.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jan 6 17:37:05.266: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:37:13.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-9lp25" for this suite. Jan 6 17:37:35.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:37:35.824: INFO: namespace: e2e-tests-init-container-9lp25, resource: bindings, ignored listing per whitelist Jan 6 17:37:35.826: INFO: namespace e2e-tests-init-container-9lp25 deletion completed in 22.13977102s • [SLOW TEST:30.902 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:37:35.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 6 17:37:35.935: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd365fdc-5045-11eb-8655-0242ac110009" in namespace "e2e-tests-downward-api-wh8v8" to be "success or failure" Jan 6 17:37:35.947: INFO: Pod "downwardapi-volume-dd365fdc-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 11.932056ms Jan 6 17:37:38.053: INFO: Pod "downwardapi-volume-dd365fdc-5045-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118134754s Jan 6 17:37:40.057: INFO: Pod "downwardapi-volume-dd365fdc-5045-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.122144968s STEP: Saw pod success Jan 6 17:37:40.057: INFO: Pod "downwardapi-volume-dd365fdc-5045-11eb-8655-0242ac110009" satisfied condition "success or failure" Jan 6 17:37:40.060: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-dd365fdc-5045-11eb-8655-0242ac110009 container client-container: STEP: delete the pod Jan 6 17:37:40.086: INFO: Waiting for pod downwardapi-volume-dd365fdc-5045-11eb-8655-0242ac110009 to disappear Jan 6 17:37:40.090: INFO: Pod downwardapi-volume-dd365fdc-5045-11eb-8655-0242ac110009 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:37:40.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wh8v8" for this suite. 
Jan 6 17:37:46.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:37:46.199: INFO: namespace: e2e-tests-downward-api-wh8v8, resource: bindings, ignored listing per whitelist Jan 6 17:37:46.233: INFO: namespace e2e-tests-downward-api-wh8v8 deletion completed in 6.139564584s • [SLOW TEST:10.407 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:37:46.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Jan 6 17:37:46.371: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jan 6 17:37:46.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4rwk5' Jan 6 
17:37:46.680: INFO: stderr: "" Jan 6 17:37:46.680: INFO: stdout: "service/redis-slave created\n" Jan 6 17:37:46.680: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jan 6 17:37:46.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4rwk5' Jan 6 17:37:47.002: INFO: stderr: "" Jan 6 17:37:47.002: INFO: stdout: "service/redis-master created\n" Jan 6 17:37:47.003: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 6 17:37:47.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4rwk5' Jan 6 17:37:47.313: INFO: stderr: "" Jan 6 17:37:47.314: INFO: stdout: "service/frontend created\n" Jan 6 17:37:47.314: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Jan 6 17:37:47.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4rwk5' Jan 6 17:37:47.598: INFO: stderr: "" Jan 6 17:37:47.598: INFO: stdout: "deployment.extensions/frontend created\n" Jan 6 17:37:47.598: 
INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 6 17:37:47.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4rwk5' Jan 6 17:37:47.886: INFO: stderr: "" Jan 6 17:37:47.886: INFO: stdout: "deployment.extensions/redis-master created\n" Jan 6 17:37:47.886: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Jan 6 17:37:47.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4rwk5' Jan 6 17:37:48.155: INFO: stderr: "" Jan 6 17:37:48.155: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Jan 6 17:37:48.155: INFO: Waiting for all frontend pods to be Running. Jan 6 17:37:58.206: INFO: Waiting for frontend to serve content. Jan 6 17:37:58.223: INFO: Trying to add a new entry to the guestbook. Jan 6 17:37:58.234: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources Jan 6 17:37:58.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4rwk5' Jan 6 17:37:58.491: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 6 17:37:58.491: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jan 6 17:37:58.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4rwk5' Jan 6 17:37:58.691: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 6 17:37:58.691: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 6 17:37:58.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4rwk5' Jan 6 17:37:58.845: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 6 17:37:58.845: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 6 17:37:58.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4rwk5' Jan 6 17:37:58.944: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 6 17:37:58.944: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 6 17:37:58.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4rwk5' Jan 6 17:37:59.063: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 6 17:37:59.063: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 6 17:37:59.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4rwk5' Jan 6 17:37:59.545: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 6 17:37:59.545: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:37:59.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4rwk5" for this suite. 
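The guestbook manifests above set `GET_HOSTS_FROM: dns`, with a comment explaining the `env` alternative for clusters without a DNS service. A sketch of that service-discovery choice, using the standard `<SERVICE>_SERVICE_HOST` variable name kubelet injects (the function and its fallback dict argument are illustrative, not part of the guestbook app):

```python
import os

def redis_master_host(get_hosts_from="dns", env=None):
    """Service discovery as in the guestbook manifests: 'dns' resolves
    the service by name via cluster DNS; 'env' reads the host from the
    REDIS_MASTER_SERVICE_HOST variable kubelet injects into pods."""
    env = os.environ if env is None else env
    if get_hosts_from == "env":
        return env["REDIS_MASTER_SERVICE_HOST"]
    return "redis-master"  # cluster DNS resolves the Service name
```

The env-var route only works for services that already existed when the pod started, which is one reason the manifests default to DNS.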
Jan 6 17:38:39.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:38:39.592: INFO: namespace: e2e-tests-kubectl-4rwk5, resource: bindings, ignored listing per whitelist Jan 6 17:38:39.659: INFO: namespace e2e-tests-kubectl-4rwk5 deletion completed in 40.107637406s • [SLOW TEST:53.426 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:38:39.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-h77jq [It] should perform canary updates and phased rolling updates of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jan 6 17:38:39.788: INFO: Found 0 stateful pods, waiting for 3 Jan 6 17:38:49.918: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 6 17:38:49.918: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 6 17:38:49.918: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 6 17:38:59.794: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 6 17:38:59.794: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 6 17:38:59.794: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 6 17:38:59.820: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 6 17:39:09.904: INFO: Updating stateful set ss2 Jan 6 17:39:09.917: INFO: Waiting for Pod e2e-tests-statefulset-h77jq/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jan 6 17:39:20.042: INFO: Found 2 stateful pods, waiting for 3 Jan 6 17:39:30.046: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 6 17:39:30.046: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 6 17:39:30.046: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 6 17:39:30.071: INFO: Updating stateful set ss2 Jan 6 17:39:30.108: INFO: Waiting for Pod 
e2e-tests-statefulset-h77jq/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 6 17:39:40.131: INFO: Updating stateful set ss2 Jan 6 17:39:40.142: INFO: Waiting for StatefulSet e2e-tests-statefulset-h77jq/ss2 to complete update Jan 6 17:39:40.142: INFO: Waiting for Pod e2e-tests-statefulset-h77jq/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 6 17:39:50.149: INFO: Deleting all statefulset in ns e2e-tests-statefulset-h77jq Jan 6 17:39:50.152: INFO: Scaling statefulset ss2 to 0 Jan 6 17:40:10.168: INFO: Waiting for statefulset status.replicas updated to 0 Jan 6 17:40:10.171: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:40:10.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-h77jq" for this suite. 
Jan 6 17:40:16.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:40:16.296: INFO: namespace: e2e-tests-statefulset-h77jq, resource: bindings, ignored listing per whitelist Jan 6 17:40:16.317: INFO: namespace e2e-tests-statefulset-h77jq deletion completed in 6.109793885s • [SLOW TEST:96.657 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:40:16.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 6 
17:40:16.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-jbpgq' Jan 6 17:40:16.555: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 6 17:40:16.555: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Jan 6 17:40:16.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-jbpgq' Jan 6 17:40:16.687: INFO: stderr: "" Jan 6 17:40:16.687: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:40:16.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jbpgq" for this suite. 
Jan 6 17:40:30.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:40:30.839: INFO: namespace: e2e-tests-kubectl-jbpgq, resource: bindings, ignored listing per whitelist Jan 6 17:40:30.855: INFO: namespace e2e-tests-kubectl-jbpgq deletion completed in 14.160410724s • [SLOW TEST:14.538 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:40:30.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:40:31.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-q7fk8" for this suite. Jan 6 17:40:37.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:40:37.136: INFO: namespace: e2e-tests-kubelet-test-q7fk8, resource: bindings, ignored listing per whitelist Jan 6 17:40:37.215: INFO: namespace e2e-tests-kubelet-test-q7fk8 deletion completed in 6.10638072s • [SLOW TEST:6.359 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:40:37.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
STEP: Creating a pod to test downward API volume plugin Jan 6 17:40:37.333: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4955e1a3-5046-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-824wk" to be "success or failure" Jan 6 17:40:37.338: INFO: Pod "downwardapi-volume-4955e1a3-5046-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.851128ms Jan 6 17:40:39.342: INFO: Pod "downwardapi-volume-4955e1a3-5046-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009100319s Jan 6 17:40:41.345: INFO: Pod "downwardapi-volume-4955e1a3-5046-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.01255375s Jan 6 17:40:43.350: INFO: Pod "downwardapi-volume-4955e1a3-5046-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01689125s STEP: Saw pod success Jan 6 17:40:43.350: INFO: Pod "downwardapi-volume-4955e1a3-5046-11eb-8655-0242ac110009" satisfied condition "success or failure" Jan 6 17:40:43.353: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4955e1a3-5046-11eb-8655-0242ac110009 container client-container: STEP: delete the pod Jan 6 17:40:43.396: INFO: Waiting for pod downwardapi-volume-4955e1a3-5046-11eb-8655-0242ac110009 to disappear Jan 6 17:40:43.418: INFO: Pod downwardapi-volume-4955e1a3-5046-11eb-8655-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:40:43.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-824wk" for this suite. 
Jan 6 17:40:49.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:40:49.451: INFO: namespace: e2e-tests-projected-824wk, resource: bindings, ignored listing per whitelist Jan 6 17:40:49.531: INFO: namespace e2e-tests-projected-824wk deletion completed in 6.108892724s • [SLOW TEST:12.316 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:40:49.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 6 17:40:49.615: INFO: Creating deployment "test-recreate-deployment" Jan 6 17:40:49.630: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 6 17:40:49.638: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Jan 6 17:40:51.645: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 6 17:40:51.648: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745551649, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745551649, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745551649, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745551649, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 17:40:53.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745551649, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745551649, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745551649, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745551649, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 17:40:55.651: INFO: Triggering a new rollout for deployment "test-recreate-deployment" 
Jan 6 17:40:55.659: INFO: Updating deployment test-recreate-deployment Jan 6 17:40:55.659: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 6 17:40:56.380: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-6hzxh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6hzxh/deployments/test-recreate-deployment,UID:50a9d381-5046-11eb-8302-0242ac120002,ResourceVersion:18052902,Generation:2,CreationTimestamp:2021-01-06 17:40:49 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2021-01-06 17:40:55 +0000 UTC 2021-01-06 17:40:55 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2021-01-06 17:40:56 +0000 UTC 2021-01-06 17:40:49 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jan 6 17:40:56.387: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-6hzxh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6hzxh/replicasets/test-recreate-deployment-589c4bfd,UID:5458cdd3-5046-11eb-8302-0242ac120002,ResourceVersion:18052899,Generation:1,CreationTimestamp:2021-01-06 17:40:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 50a9d381-5046-11eb-8302-0242ac120002 0xc00171bd1f 0xc00171bd30}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 6 17:40:56.387: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 6 17:40:56.388: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-6hzxh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6hzxh/replicasets/test-recreate-deployment-5bf7f65dc,UID:50aced8f-5046-11eb-8302-0242ac120002,ResourceVersion:18052890,Generation:2,CreationTimestamp:2021-01-06 17:40:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 50a9d381-5046-11eb-8302-0242ac120002 0xc00171bdf0 0xc00171bdf1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 6 17:40:56.391: INFO: Pod "test-recreate-deployment-589c4bfd-g92zs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-g92zs,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-6hzxh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6hzxh/pods/test-recreate-deployment-589c4bfd-g92zs,UID:54686ac3-5046-11eb-8302-0242ac120002,ResourceVersion:18052904,Generation:0,CreationTimestamp:2021-01-06 17:40:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 5458cdd3-5046-11eb-8302-0242ac120002 0xc0023f069f 0xc0023f06b0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-sq8vb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sq8vb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-sq8vb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023f0720} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023f0740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 17:40:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 17:40:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 17:40:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 17:40:55 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-01-06 17:40:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:40:56.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-6hzxh" for this suite. 
Jan 6 17:41:04.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:41:04.419: INFO: namespace: e2e-tests-deployment-6hzxh, resource: bindings, ignored listing per whitelist
Jan 6 17:41:04.537: INFO: namespace e2e-tests-deployment-6hzxh deletion completed in 8.140144172s

• [SLOW TEST:15.006 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:41:04.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 6 17:41:04.691: INFO: Waiting up to 5m0s for pod "pod-599c0dcb-5046-11eb-8655-0242ac110009" in namespace "e2e-tests-emptydir-k7cr8" to be "success or failure"
Jan 6 17:41:04.696: INFO: Pod "pod-599c0dcb-5046-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118504ms
Jan 6 17:41:06.699: INFO: Pod "pod-599c0dcb-5046-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007930086s
Jan 6 17:41:08.704: INFO: Pod "pod-599c0dcb-5046-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.012071113s
Jan 6 17:41:10.707: INFO: Pod "pod-599c0dcb-5046-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015910549s
STEP: Saw pod success
Jan 6 17:41:10.707: INFO: Pod "pod-599c0dcb-5046-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan 6 17:41:10.710: INFO: Trying to get logs from node hunter-worker2 pod pod-599c0dcb-5046-11eb-8655-0242ac110009 container test-container:
STEP: delete the pod
Jan 6 17:41:10.751: INFO: Waiting for pod pod-599c0dcb-5046-11eb-8655-0242ac110009 to disappear
Jan 6 17:41:10.761: INFO: Pod pod-599c0dcb-5046-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:41:10.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k7cr8" for this suite.
Jan 6 17:41:16.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:41:16.839: INFO: namespace: e2e-tests-emptydir-k7cr8, resource: bindings, ignored listing per whitelist
Jan 6 17:41:16.853: INFO: namespace e2e-tests-emptydir-k7cr8 deletion completed in 6.087847358s

• [SLOW TEST:12.316 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:41:16.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:41:20.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-8nfbt" for this suite.
Jan 6 17:42:07.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:42:07.064: INFO: namespace: e2e-tests-kubelet-test-8nfbt, resource: bindings, ignored listing per whitelist
Jan 6 17:42:07.110: INFO: namespace e2e-tests-kubelet-test-8nfbt deletion completed in 46.107981784s

• [SLOW TEST:50.257 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:42:07.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-2f79v
Jan 6 17:42:11.264: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-2f79v
STEP: checking the pod's current state and verifying that restartCount is present
Jan 6 17:42:11.267: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:46:12.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-2f79v" for this suite.
Jan 6 17:46:18.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:46:18.119: INFO: namespace: e2e-tests-container-probe-2f79v, resource: bindings, ignored listing per whitelist
Jan 6 17:46:18.127: INFO: namespace e2e-tests-container-probe-2f79v deletion completed in 6.097197519s

• [SLOW TEST:251.017 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:46:18.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 6 17:46:18.306: INFO: Waiting up to 5m0s for pod "pod-148c9f55-5047-11eb-8655-0242ac110009" in namespace "e2e-tests-emptydir-mt884" to be "success or failure"
Jan 6 17:46:18.317: INFO: Pod "pod-148c9f55-5047-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 10.370948ms
Jan 6 17:46:20.321: INFO: Pod "pod-148c9f55-5047-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014639203s
Jan 6 17:46:22.328: INFO: Pod "pod-148c9f55-5047-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021277955s
STEP: Saw pod success
Jan 6 17:46:22.328: INFO: Pod "pod-148c9f55-5047-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan 6 17:46:22.330: INFO: Trying to get logs from node hunter-worker2 pod pod-148c9f55-5047-11eb-8655-0242ac110009 container test-container:
STEP: delete the pod
Jan 6 17:46:22.348: INFO: Waiting for pod pod-148c9f55-5047-11eb-8655-0242ac110009 to disappear
Jan 6 17:46:22.391: INFO: Pod pod-148c9f55-5047-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:46:22.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mt884" for this suite.
Jan 6 17:46:28.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:46:28.448: INFO: namespace: e2e-tests-emptydir-mt884, resource: bindings, ignored listing per whitelist
Jan 6 17:46:28.495: INFO: namespace e2e-tests-emptydir-mt884 deletion completed in 6.100886071s

• [SLOW TEST:10.368 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:46:28.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-1ab2ee24-5047-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 6 17:46:28.624: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ab5cba1-5047-11eb-8655-0242ac110009" in namespace "e2e-tests-configmap-hdnxs" to be "success or failure"
Jan 6 17:46:28.628: INFO: Pod "pod-configmaps-1ab5cba1-5047-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.078309ms
Jan 6 17:46:30.631: INFO: Pod "pod-configmaps-1ab5cba1-5047-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00649501s
Jan 6 17:46:32.636: INFO: Pod "pod-configmaps-1ab5cba1-5047-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011182401s
STEP: Saw pod success
Jan 6 17:46:32.636: INFO: Pod "pod-configmaps-1ab5cba1-5047-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan 6 17:46:32.640: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-1ab5cba1-5047-11eb-8655-0242ac110009 container configmap-volume-test:
STEP: delete the pod
Jan 6 17:46:32.979: INFO: Waiting for pod pod-configmaps-1ab5cba1-5047-11eb-8655-0242ac110009 to disappear
Jan 6 17:46:32.982: INFO: Pod pod-configmaps-1ab5cba1-5047-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:46:32.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hdnxs" for this suite.
Jan 6 17:46:39.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:46:39.084: INFO: namespace: e2e-tests-configmap-hdnxs, resource: bindings, ignored listing per whitelist
Jan 6 17:46:39.182: INFO: namespace e2e-tests-configmap-hdnxs deletion completed in 6.196989458s

• [SLOW TEST:10.687 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:46:39.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 6 17:46:39.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:46:42.056: INFO: stderr: ""
Jan 6 17:46:42.056: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 6 17:46:42.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:46:42.193: INFO: stderr: ""
Jan 6 17:46:42.193: INFO: stdout: "update-demo-nautilus-8jjgp update-demo-nautilus-ltlsx "
Jan 6 17:46:42.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8jjgp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:46:42.305: INFO: stderr: ""
Jan 6 17:46:42.305: INFO: stdout: ""
Jan 6 17:46:42.305: INFO: update-demo-nautilus-8jjgp is created but not running
Jan 6 17:46:47.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:46:47.411: INFO: stderr: ""
Jan 6 17:46:47.411: INFO: stdout: "update-demo-nautilus-8jjgp update-demo-nautilus-ltlsx "
Jan 6 17:46:47.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8jjgp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:46:47.522: INFO: stderr: ""
Jan 6 17:46:47.522: INFO: stdout: "true"
Jan 6 17:46:47.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8jjgp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:46:47.603: INFO: stderr: ""
Jan 6 17:46:47.603: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 6 17:46:47.603: INFO: validating pod update-demo-nautilus-8jjgp
Jan 6 17:46:47.607: INFO: got data: { "image": "nautilus.jpg" }
Jan 6 17:46:47.607: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 6 17:46:47.607: INFO: update-demo-nautilus-8jjgp is verified up and running
Jan 6 17:46:47.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ltlsx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:46:47.698: INFO: stderr: ""
Jan 6 17:46:47.698: INFO: stdout: "true"
Jan 6 17:46:47.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ltlsx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:46:47.805: INFO: stderr: ""
Jan 6 17:46:47.805: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 6 17:46:47.805: INFO: validating pod update-demo-nautilus-ltlsx
Jan 6 17:46:47.809: INFO: got data: { "image": "nautilus.jpg" }
Jan 6 17:46:47.809: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 6 17:46:47.810: INFO: update-demo-nautilus-ltlsx is verified up and running
STEP: scaling down the replication controller
Jan 6 17:46:47.813: INFO: scanned /root for discovery docs:
Jan 6 17:46:47.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:46:48.973: INFO: stderr: ""
Jan 6 17:46:48.973: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 6 17:46:48.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:46:49.294: INFO: stderr: ""
Jan 6 17:46:49.294: INFO: stdout: "update-demo-nautilus-8jjgp update-demo-nautilus-ltlsx "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 6 17:46:54.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:46:54.413: INFO: stderr: ""
Jan 6 17:46:54.414: INFO: stdout: "update-demo-nautilus-8jjgp update-demo-nautilus-ltlsx "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 6 17:46:59.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:46:59.514: INFO: stderr: ""
Jan 6 17:46:59.514: INFO: stdout: "update-demo-nautilus-ltlsx "
Jan 6 17:46:59.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ltlsx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:46:59.617: INFO: stderr: ""
Jan 6 17:46:59.617: INFO: stdout: "true"
Jan 6 17:46:59.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ltlsx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:46:59.708: INFO: stderr: ""
Jan 6 17:46:59.708: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 6 17:46:59.708: INFO: validating pod update-demo-nautilus-ltlsx
Jan 6 17:46:59.711: INFO: got data: { "image": "nautilus.jpg" }
Jan 6 17:46:59.711: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 6 17:46:59.711: INFO: update-demo-nautilus-ltlsx is verified up and running
STEP: scaling up the replication controller
Jan 6 17:46:59.713: INFO: scanned /root for discovery docs:
Jan 6 17:46:59.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:47:00.842: INFO: stderr: ""
Jan 6 17:47:00.842: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 6 17:47:00.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:47:00.940: INFO: stderr: ""
Jan 6 17:47:00.940: INFO: stdout: "update-demo-nautilus-ltlsx update-demo-nautilus-wzd8n "
Jan 6 17:47:00.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ltlsx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:47:01.054: INFO: stderr: ""
Jan 6 17:47:01.054: INFO: stdout: "true"
Jan 6 17:47:01.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ltlsx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:47:01.156: INFO: stderr: ""
Jan 6 17:47:01.156: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 6 17:47:01.157: INFO: validating pod update-demo-nautilus-ltlsx
Jan 6 17:47:01.159: INFO: got data: { "image": "nautilus.jpg" }
Jan 6 17:47:01.159: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 6 17:47:01.159: INFO: update-demo-nautilus-ltlsx is verified up and running
Jan 6 17:47:01.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzd8n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:47:01.252: INFO: stderr: ""
Jan 6 17:47:01.252: INFO: stdout: ""
Jan 6 17:47:01.252: INFO: update-demo-nautilus-wzd8n is created but not running
Jan 6 17:47:06.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:47:06.375: INFO: stderr: ""
Jan 6 17:47:06.375: INFO: stdout: "update-demo-nautilus-ltlsx update-demo-nautilus-wzd8n "
Jan 6 17:47:06.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ltlsx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:47:06.484: INFO: stderr: ""
Jan 6 17:47:06.484: INFO: stdout: "true"
Jan 6 17:47:06.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ltlsx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:47:06.578: INFO: stderr: ""
Jan 6 17:47:06.578: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 6 17:47:06.578: INFO: validating pod update-demo-nautilus-ltlsx
Jan 6 17:47:06.582: INFO: got data: { "image": "nautilus.jpg" }
Jan 6 17:47:06.582: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 6 17:47:06.582: INFO: update-demo-nautilus-ltlsx is verified up and running
Jan 6 17:47:06.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzd8n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:47:06.688: INFO: stderr: ""
Jan 6 17:47:06.688: INFO: stdout: "true"
Jan 6 17:47:06.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzd8n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:47:06.784: INFO: stderr: ""
Jan 6 17:47:06.784: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 6 17:47:06.784: INFO: validating pod update-demo-nautilus-wzd8n
Jan 6 17:47:06.788: INFO: got data: { "image": "nautilus.jpg" }
Jan 6 17:47:06.788: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 6 17:47:06.788: INFO: update-demo-nautilus-wzd8n is verified up and running
STEP: using delete to clean up resources
Jan 6 17:47:06.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:47:06.903: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 6 17:47:06.903: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 6 17:47:06.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-jvhpg'
Jan 6 17:47:07.000: INFO: stderr: "No resources found.\n"
Jan 6 17:47:07.000: INFO: stdout: ""
Jan 6 17:47:07.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-jvhpg -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 6 17:47:07.101: INFO: stderr: ""
Jan 6 17:47:07.101: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:47:07.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jvhpg" for this suite.
Jan 6 17:47:13.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:47:13.322: INFO: namespace: e2e-tests-kubectl-jvhpg, resource: bindings, ignored listing per whitelist
Jan 6 17:47:13.390: INFO: namespace e2e-tests-kubectl-jvhpg deletion completed in 6.285640503s

• [SLOW TEST:34.207 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:47:13.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 6 17:47:13.481: INFO: Waiting up to 5m0s for pod "downward-api-35749fa8-5047-11eb-8655-0242ac110009" in namespace "e2e-tests-downward-api-2wt58" to be "success or failure"
Jan 6 17:47:13.492: INFO: Pod "downward-api-35749fa8-5047-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 10.291861ms
Jan 6 17:47:15.505: INFO: Pod "downward-api-35749fa8-5047-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024037808s
Jan 6 17:47:17.514: INFO: Pod "downward-api-35749fa8-5047-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032472936s
STEP: Saw pod success
Jan 6 17:47:17.514: INFO: Pod "downward-api-35749fa8-5047-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan 6 17:47:17.517: INFO: Trying to get logs from node hunter-worker2 pod downward-api-35749fa8-5047-11eb-8655-0242ac110009 container dapi-container:
STEP: delete the pod
Jan 6 17:47:17.546: INFO: Waiting for pod downward-api-35749fa8-5047-11eb-8655-0242ac110009 to disappear
Jan 6 17:47:17.551: INFO: Pod downward-api-35749fa8-5047-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:47:17.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2wt58" for this suite.
Jan 6 17:47:23.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:47:23.585: INFO: namespace: e2e-tests-downward-api-2wt58, resource: bindings, ignored listing per whitelist
Jan 6 17:47:23.661: INFO: namespace e2e-tests-downward-api-2wt58 deletion completed in 6.107130529s

• [SLOW TEST:10.271 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:47:23.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 6 17:47:23.806: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 6 17:47:23.808: INFO: Number of nodes with available pods: 0
Jan 6 17:47:23.808: INFO: Node hunter-worker is running more than one daemon pod
Jan 6 17:47:24.813: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 6 17:47:24.816: INFO: Number of nodes with available pods: 0
Jan 6 17:47:24.816: INFO: Node hunter-worker is running more than one daemon pod
Jan 6 17:47:25.813: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 6 17:47:25.816: INFO: Number of nodes with available pods: 0
Jan 6 17:47:25.816: INFO: Node hunter-worker is running more than one daemon pod
Jan 6 17:47:26.813: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 6 17:47:26.817: INFO: Number of nodes with available pods: 0
Jan 6 17:47:26.817: INFO: Node hunter-worker is running more than one daemon pod
Jan 6 17:47:27.812: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 6 17:47:27.816: INFO: Number of nodes with available pods: 1
Jan 6 17:47:27.816: INFO: Node hunter-worker is running more than one daemon pod
Jan 6 17:47:28.812: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 6 17:47:28.815: INFO: Number of nodes with available pods: 2
Jan 6 17:47:28.815: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 6 17:47:28.865: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 6 17:47:28.881: INFO: Number of nodes with available pods: 2
Jan 6 17:47:28.881: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-52dc5, will wait for the garbage collector to delete the pods
Jan 6 17:47:29.967: INFO: Deleting DaemonSet.extensions daemon-set took: 8.443458ms
Jan 6 17:47:30.168: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.287544ms
Jan 6 17:47:34.870: INFO: Number of nodes with available pods: 0
Jan 6 17:47:34.870: INFO: Number of running nodes: 0, number of available pods: 0
Jan 6 17:47:34.873: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-52dc5/daemonsets","resourceVersion":"18053970"},"items":null}
Jan 6 17:47:34.875: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-52dc5/pods","resourceVersion":"18053970"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:47:34.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-52dc5" for this suite.
Jan 6 17:47:40.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:47:40.968: INFO: namespace: e2e-tests-daemonsets-52dc5, resource: bindings, ignored listing per whitelist Jan 6 17:47:40.986: INFO: namespace e2e-tests-daemonsets-52dc5 deletion completed in 6.100430706s • [SLOW TEST:17.325 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:47:40.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-45ebf1c9-5047-11eb-8655-0242ac110009 STEP: Creating secret with name s-test-opt-upd-45ebf223-5047-11eb-8655-0242ac110009 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-45ebf1c9-5047-11eb-8655-0242ac110009 STEP: Updating secret s-test-opt-upd-45ebf223-5047-11eb-8655-0242ac110009 STEP: Creating secret with name s-test-opt-create-45ebf23a-5047-11eb-8655-0242ac110009 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:47:49.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-plkjx" for this suite. Jan 6 17:48:11.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:48:11.389: INFO: namespace: e2e-tests-secrets-plkjx, resource: bindings, ignored listing per whitelist Jan 6 17:48:11.431: INFO: namespace e2e-tests-secrets-plkjx deletion completed in 22.108869526s • [SLOW TEST:30.445 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:48:11.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Jan 6 17:48:11.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dg9kk' Jan 6 17:48:11.833: INFO: stderr: "" Jan 6 
17:48:11.833: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Jan 6 17:48:12.838: INFO: Selector matched 1 pods for map[app:redis] Jan 6 17:48:12.838: INFO: Found 0 / 1 Jan 6 17:48:13.895: INFO: Selector matched 1 pods for map[app:redis] Jan 6 17:48:13.895: INFO: Found 0 / 1 Jan 6 17:48:14.838: INFO: Selector matched 1 pods for map[app:redis] Jan 6 17:48:14.838: INFO: Found 0 / 1 Jan 6 17:48:15.838: INFO: Selector matched 1 pods for map[app:redis] Jan 6 17:48:15.838: INFO: Found 1 / 1 Jan 6 17:48:15.838: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 6 17:48:15.841: INFO: Selector matched 1 pods for map[app:redis] Jan 6 17:48:15.841: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jan 6 17:48:15.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mgk2g redis-master --namespace=e2e-tests-kubectl-dg9kk' Jan 6 17:48:15.965: INFO: stderr: "" Jan 6 17:48:15.965: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 06 Jan 17:48:14.906 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Jan 17:48:14.906 # Server started, Redis version 3.2.12\n1:M 06 Jan 17:48:14.906 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Jan 17:48:14.906 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jan 6 17:48:15.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-mgk2g redis-master --namespace=e2e-tests-kubectl-dg9kk --tail=1' Jan 6 17:48:16.090: INFO: stderr: "" Jan 6 17:48:16.090: INFO: stdout: "1:M 06 Jan 17:48:14.906 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jan 6 17:48:16.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-mgk2g redis-master --namespace=e2e-tests-kubectl-dg9kk --limit-bytes=1' Jan 6 17:48:16.200: INFO: stderr: "" Jan 6 17:48:16.200: INFO: stdout: " " STEP: exposing timestamps Jan 6 17:48:16.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-mgk2g redis-master --namespace=e2e-tests-kubectl-dg9kk --tail=1 --timestamps' Jan 6 17:48:16.296: INFO: stderr: "" 
Jan 6 17:48:16.297: INFO: stdout: "2021-01-06T17:48:14.906979736Z 1:M 06 Jan 17:48:14.906 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jan 6 17:48:18.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-mgk2g redis-master --namespace=e2e-tests-kubectl-dg9kk --since=1s' Jan 6 17:48:18.913: INFO: stderr: "" Jan 6 17:48:18.913: INFO: stdout: "" Jan 6 17:48:18.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-mgk2g redis-master --namespace=e2e-tests-kubectl-dg9kk --since=24h' Jan 6 17:48:19.030: INFO: stderr: "" Jan 6 17:48:19.030: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 06 Jan 17:48:14.906 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Jan 17:48:14.906 # Server started, Redis version 3.2.12\n1:M 06 Jan 17:48:14.906 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 06 Jan 17:48:14.906 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Jan 6 17:48:19.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dg9kk' Jan 6 17:48:19.169: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 6 17:48:19.169: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jan 6 17:48:19.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-dg9kk' Jan 6 17:48:19.285: INFO: stderr: "No resources found.\n" Jan 6 17:48:19.285: INFO: stdout: "" Jan 6 17:48:19.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-dg9kk -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 6 17:48:19.391: INFO: stderr: "" Jan 6 17:48:19.391: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:48:19.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dg9kk" for this suite. 
Jan 6 17:48:25.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:48:25.625: INFO: namespace: e2e-tests-kubectl-dg9kk, resource: bindings, ignored listing per whitelist Jan 6 17:48:25.695: INFO: namespace e2e-tests-kubectl-dg9kk deletion completed in 6.300228872s • [SLOW TEST:14.263 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:48:25.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-60937cdf-5047-11eb-8655-0242ac110009 STEP: Creating a pod to test consume secrets Jan 6 17:48:25.831: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-60955c46-5047-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-74js4" to be "success or failure" Jan 6 
17:48:25.835: INFO: Pod "pod-projected-secrets-60955c46-5047-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.805804ms Jan 6 17:48:27.838: INFO: Pod "pod-projected-secrets-60955c46-5047-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007235882s Jan 6 17:48:29.842: INFO: Pod "pod-projected-secrets-60955c46-5047-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010747858s STEP: Saw pod success Jan 6 17:48:29.842: INFO: Pod "pod-projected-secrets-60955c46-5047-11eb-8655-0242ac110009" satisfied condition "success or failure" Jan 6 17:48:29.845: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-60955c46-5047-11eb-8655-0242ac110009 container projected-secret-volume-test: STEP: delete the pod Jan 6 17:48:29.878: INFO: Waiting for pod pod-projected-secrets-60955c46-5047-11eb-8655-0242ac110009 to disappear Jan 6 17:48:29.881: INFO: Pod pod-projected-secrets-60955c46-5047-11eb-8655-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:48:29.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-74js4" for this suite. 
Jan 6 17:48:35.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:48:36.004: INFO: namespace: e2e-tests-projected-74js4, resource: bindings, ignored listing per whitelist Jan 6 17:48:36.043: INFO: namespace e2e-tests-projected-74js4 deletion completed in 6.13487453s • [SLOW TEST:10.348 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:48:36.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Jan 6 17:48:36.171: INFO: Waiting up to 5m0s for pod "var-expansion-66bf75bd-5047-11eb-8655-0242ac110009" in namespace "e2e-tests-var-expansion-f8bc6" to be "success or failure" Jan 6 17:48:36.197: INFO: Pod "var-expansion-66bf75bd-5047-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.811127ms Jan 6 17:48:38.201: INFO: Pod "var-expansion-66bf75bd-5047-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029512757s Jan 6 17:48:40.204: INFO: Pod "var-expansion-66bf75bd-5047-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032236623s STEP: Saw pod success Jan 6 17:48:40.204: INFO: Pod "var-expansion-66bf75bd-5047-11eb-8655-0242ac110009" satisfied condition "success or failure" Jan 6 17:48:40.205: INFO: Trying to get logs from node hunter-worker pod var-expansion-66bf75bd-5047-11eb-8655-0242ac110009 container dapi-container: STEP: delete the pod Jan 6 17:48:40.260: INFO: Waiting for pod var-expansion-66bf75bd-5047-11eb-8655-0242ac110009 to disappear Jan 6 17:48:40.278: INFO: Pod var-expansion-66bf75bd-5047-11eb-8655-0242ac110009 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:48:40.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-f8bc6" for this suite. 
Jan 6 17:48:46.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:48:46.394: INFO: namespace: e2e-tests-var-expansion-f8bc6, resource: bindings, ignored listing per whitelist Jan 6 17:48:46.398: INFO: namespace e2e-tests-var-expansion-f8bc6 deletion completed in 6.115342861s • [SLOW TEST:10.355 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:48:46.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 6 17:48:46.530: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 6 17:48:51.534: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 6 17:48:51.534: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 6 17:48:53.538: INFO: Creating deployment "test-rollover-deployment" Jan 6 17:48:53.559: INFO: Make sure deployment "test-rollover-deployment" performs scaling 
operations Jan 6 17:48:55.565: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 6 17:48:55.572: INFO: Ensure that both replica sets have 1 created replica Jan 6 17:48:55.578: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 6 17:48:55.584: INFO: Updating deployment test-rollover-deployment Jan 6 17:48:55.584: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 6 17:48:57.593: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 6 17:48:57.600: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 6 17:48:57.606: INFO: all replica sets need to contain the pod-template-hash label Jan 6 17:48:57.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552135, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 17:48:59.613: INFO: all replica sets need to contain the pod-template-hash label Jan 6 17:48:59.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552135, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 17:49:01.614: INFO: all replica sets need to contain the pod-template-hash label Jan 6 17:49:01.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552139, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 17:49:03.614: INFO: all replica sets need to contain the pod-template-hash label Jan 6 17:49:03.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552139, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 17:49:05.615: INFO: all replica sets need to contain the pod-template-hash label Jan 6 17:49:05.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552139, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 17:49:07.615: INFO: all replica sets need to contain the pod-template-hash 
label Jan 6 17:49:07.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552139, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 17:49:09.612: INFO: all replica sets need to contain the pod-template-hash label Jan 6 17:49:09.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552139, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745552133, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jan 6 17:49:11.614: INFO: Jan 6 17:49:11.614: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 6 17:49:11.622: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-x5qdj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x5qdj/deployments/test-rollover-deployment,UID:711a879c-5047-11eb-8302-0242ac120002,ResourceVersion:18054380,Generation:2,CreationTimestamp:2021-01-06 17:48:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2021-01-06 17:48:53 +0000 UTC 2021-01-06 17:48:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-01-06 17:49:09 +0000 UTC 2021-01-06 17:48:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 6 17:49:11.625: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-x5qdj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x5qdj/replicasets/test-rollover-deployment-5b8479fdb6,UID:7252c792-5047-11eb-8302-0242ac120002,ResourceVersion:18054371,Generation:2,CreationTimestamp:2021-01-06 17:48:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 711a879c-5047-11eb-8302-0242ac120002 0xc001c57fc7 0xc001c57fc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 6 17:49:11.625: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 6 17:49:11.625: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-x5qdj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x5qdj/replicasets/test-rollover-controller,UID:6ce995a8-5047-11eb-8302-0242ac120002,ResourceVersion:18054379,Generation:2,CreationTimestamp:2021-01-06 17:48:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 711a879c-5047-11eb-8302-0242ac120002 0xc001c57e37 0xc001c57e38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 6 17:49:11.625: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-x5qdj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x5qdj/replicasets/test-rollover-deployment-58494b7559,UID:711ec935-5047-11eb-8302-0242ac120002,ResourceVersion:18054332,Generation:2,CreationTimestamp:2021-01-06 17:48:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 711a879c-5047-11eb-8302-0242ac120002 0xc001c57ef7 0xc001c57ef8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 6 17:49:11.628: INFO: Pod "test-rollover-deployment-5b8479fdb6-lkpls" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-lkpls,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-x5qdj,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x5qdj/pods/test-rollover-deployment-5b8479fdb6-lkpls,UID:7263683c-5047-11eb-8302-0242ac120002,ResourceVersion:18054349,Generation:0,CreationTimestamp:2021-01-06 17:48:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 7252c792-5047-11eb-8302-0242ac120002 0xc00245ab67 0xc00245ab68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zwpts {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwpts,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-zwpts true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00245abe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00245ac00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 17:48:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 17:48:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 17:48:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 17:48:55 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.47,StartTime:2021-01-06 17:48:55 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2021-01-06 17:48:59 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://957b3bee5df61a6a722c742da3e24d8d5321da37d7bda1def4551a2be1c8d79c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:49:11.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-x5qdj" for this suite.
Jan 6 17:49:19.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:49:19.671: INFO: namespace: e2e-tests-deployment-x5qdj, resource: bindings, ignored listing per whitelist
Jan 6 17:49:19.738: INFO: namespace e2e-tests-deployment-x5qdj deletion completed in 8.106850419s
• [SLOW TEST:33.340 seconds]
[sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:49:19.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 6 17:49:27.891: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 6 17:49:27.896: INFO: Pod pod-with-poststart-http-hook still exists
Jan 6 17:49:29.896: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 6 17:49:29.901: INFO: Pod pod-with-poststart-http-hook still exists
Jan 6 17:49:31.896: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 6 17:49:31.900: INFO: Pod pod-with-poststart-http-hook still exists
Jan 6 17:49:33.896: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 6 17:49:33.900: INFO: Pod pod-with-poststart-http-hook still exists
Jan 6 17:49:35.896: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 6 17:49:35.901: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:49:35.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-x8nt9" for this suite.
Jan 6 17:49:57.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:49:57.969: INFO: namespace: e2e-tests-container-lifecycle-hook-x8nt9, resource: bindings, ignored listing per whitelist
Jan 6 17:49:58.015: INFO: namespace e2e-tests-container-lifecycle-hook-x8nt9 deletion completed in 22.110526561s
• [SLOW TEST:38.277 seconds]
[k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:49:58.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 6 17:50:02.166: INFO: Waiting up to 5m0s for pod "client-envvars-99ffa8f4-5047-11eb-8655-0242ac110009" in namespace "e2e-tests-pods-h9299" to be "success or failure"
Jan 6 17:50:02.214: INFO: Pod "client-envvars-99ffa8f4-5047-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 47.771632ms
Jan 6 17:50:04.218: INFO: Pod "client-envvars-99ffa8f4-5047-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051922432s
Jan 6 17:50:06.222: INFO: Pod "client-envvars-99ffa8f4-5047-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055479225s
STEP: Saw pod success
Jan 6 17:50:06.222: INFO: Pod "client-envvars-99ffa8f4-5047-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan 6 17:50:06.224: INFO: Trying to get logs from node hunter-worker pod client-envvars-99ffa8f4-5047-11eb-8655-0242ac110009 container env3cont:
STEP: delete the pod
Jan 6 17:50:06.348: INFO: Waiting for pod client-envvars-99ffa8f4-5047-11eb-8655-0242ac110009 to disappear
Jan 6 17:50:06.495: INFO: Pod client-envvars-99ffa8f4-5047-11eb-8655-0242ac110009 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:50:06.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-h9299" for this suite.
Jan 6 17:50:58.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:50:58.582: INFO: namespace: e2e-tests-pods-h9299, resource: bindings, ignored listing per whitelist
Jan 6 17:50:58.610: INFO: namespace e2e-tests-pods-h9299 deletion completed in 52.11115651s
• [SLOW TEST:60.595 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:50:58.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 6 17:50:58.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 6 17:50:58.905: INFO: stderr: ""
Jan 6 17:50:58.905: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-12-13T01:19:52Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-09-14T08:26:17Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:50:58.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bwxsd" for this suite.
Jan 6 17:51:04.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:51:04.978: INFO: namespace: e2e-tests-kubectl-bwxsd, resource: bindings, ignored listing per whitelist
Jan 6 17:51:05.091: INFO: namespace e2e-tests-kubectl-bwxsd deletion completed in 6.180640973s
• [SLOW TEST:6.480 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:51:05.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-bf9098ea-5047-11eb-8655-0242ac110009
Jan 6 17:51:05.205: INFO: Pod name my-hostname-basic-bf9098ea-5047-11eb-8655-0242ac110009: Found 0 pods out of 1
Jan 6 17:51:10.210: INFO: Pod name my-hostname-basic-bf9098ea-5047-11eb-8655-0242ac110009: Found 1 pods out of 1
Jan 6 17:51:10.210: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-bf9098ea-5047-11eb-8655-0242ac110009" are running
Jan 6 17:51:10.213: INFO: Pod "my-hostname-basic-bf9098ea-5047-11eb-8655-0242ac110009-52cl8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-06 17:51:05 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-06 17:51:09 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-06 17:51:09 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-06 17:51:05 +0000 UTC Reason: Message:}])
Jan 6 17:51:10.213: INFO: Trying to dial the pod
Jan 6 17:51:15.226: INFO: Controller my-hostname-basic-bf9098ea-5047-11eb-8655-0242ac110009: Got expected result from replica 1 [my-hostname-basic-bf9098ea-5047-11eb-8655-0242ac110009-52cl8]: "my-hostname-basic-bf9098ea-5047-11eb-8655-0242ac110009-52cl8", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:51:15.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-sjvrp" for this suite.
Jan 6 17:51:21.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:51:21.310: INFO: namespace: e2e-tests-replication-controller-sjvrp, resource: bindings, ignored listing per whitelist
Jan 6 17:51:21.343: INFO: namespace e2e-tests-replication-controller-sjvrp deletion completed in 6.113383394s
• [SLOW TEST:16.252 seconds]
[sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:51:21.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0106 17:52:02.063420 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 6 17:52:02.063: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:52:02.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7cvvv" for this suite.
Jan 6 17:52:10.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:52:10.163: INFO: namespace: e2e-tests-gc-7cvvv, resource: bindings, ignored listing per whitelist
Jan 6 17:52:10.166: INFO: namespace e2e-tests-gc-7cvvv deletion completed in 8.099653537s
• [SLOW TEST:48.823 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:52:10.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 6 17:52:10.433: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-74bnn,SelfLink:/api/v1/namespaces/e2e-tests-watch-74bnn/configmaps/e2e-watch-test-label-changed,UID:e66e5c8b-5047-11eb-8302-0242ac120002,ResourceVersion:18055086,Generation:0,CreationTimestamp:2021-01-06 17:52:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 6 17:52:10.434: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-74bnn,SelfLink:/api/v1/namespaces/e2e-tests-watch-74bnn/configmaps/e2e-watch-test-label-changed,UID:e66e5c8b-5047-11eb-8302-0242ac120002,ResourceVersion:18055087,Generation:0,CreationTimestamp:2021-01-06 17:52:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 6 17:52:10.434: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-74bnn,SelfLink:/api/v1/namespaces/e2e-tests-watch-74bnn/configmaps/e2e-watch-test-label-changed,UID:e66e5c8b-5047-11eb-8302-0242ac120002,ResourceVersion:18055088,Generation:0,CreationTimestamp:2021-01-06 17:52:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 6 17:52:20.459: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-74bnn,SelfLink:/api/v1/namespaces/e2e-tests-watch-74bnn/configmaps/e2e-watch-test-label-changed,UID:e66e5c8b-5047-11eb-8302-0242ac120002,ResourceVersion:18055109,Generation:0,CreationTimestamp:2021-01-06 17:52:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 6 17:52:20.459: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-74bnn,SelfLink:/api/v1/namespaces/e2e-tests-watch-74bnn/configmaps/e2e-watch-test-label-changed,UID:e66e5c8b-5047-11eb-8302-0242ac120002,ResourceVersion:18055110,Generation:0,CreationTimestamp:2021-01-06 17:52:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 6 17:52:20.459: 
INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-74bnn,SelfLink:/api/v1/namespaces/e2e-tests-watch-74bnn/configmaps/e2e-watch-test-label-changed,UID:e66e5c8b-5047-11eb-8302-0242ac120002,ResourceVersion:18055111,Generation:0,CreationTimestamp:2021-01-06 17:52:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:52:20.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-74bnn" for this suite. Jan 6 17:52:26.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:52:26.507: INFO: namespace: e2e-tests-watch-74bnn, resource: bindings, ignored listing per whitelist Jan 6 17:52:26.559: INFO: namespace e2e-tests-watch-74bnn deletion completed in 6.09539755s • [SLOW TEST:16.393 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:52:26.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Jan 6 17:52:26.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mj6jr' Jan 6 17:52:26.965: INFO: stderr: "" Jan 6 17:52:26.965: INFO: stdout: "pod/pause created\n" Jan 6 17:52:26.965: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 6 17:52:26.965: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-mj6jr" to be "running and ready" Jan 6 17:52:26.998: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 33.578381ms Jan 6 17:52:29.057: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091972911s Jan 6 17:52:31.061: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.095912719s Jan 6 17:52:31.061: INFO: Pod "pause" satisfied condition "running and ready" Jan 6 17:52:31.061: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Jan 6 17:52:31.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-mj6jr' Jan 6 17:52:31.173: INFO: stderr: "" Jan 6 17:52:31.173: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 6 17:52:31.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-mj6jr' Jan 6 17:52:31.266: INFO: stderr: "" Jan 6 17:52:31.266: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 6 17:52:31.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-mj6jr' Jan 6 17:52:31.373: INFO: stderr: "" Jan 6 17:52:31.373: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 6 17:52:31.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-mj6jr' Jan 6 17:52:31.510: INFO: stderr: "" Jan 6 17:52:31.510: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Jan 6 17:52:31.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mj6jr' Jan 6 17:52:31.651: INFO: stderr: "warning: Immediate deletion does not 
wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 6 17:52:31.651: INFO: stdout: "pod \"pause\" force deleted\n" Jan 6 17:52:31.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-mj6jr' Jan 6 17:52:32.091: INFO: stderr: "No resources found.\n" Jan 6 17:52:32.091: INFO: stdout: "" Jan 6 17:52:32.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-mj6jr -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 6 17:52:32.184: INFO: stderr: "" Jan 6 17:52:32.184: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:52:32.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mj6jr" for this suite. 
Jan 6 17:52:38.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:52:38.229: INFO: namespace: e2e-tests-kubectl-mj6jr, resource: bindings, ignored listing per whitelist Jan 6 17:52:38.310: INFO: namespace e2e-tests-kubectl-mj6jr deletion completed in 6.122888562s • [SLOW TEST:11.751 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:52:38.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 6 17:52:38.432: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 6 17:52:38.444: INFO: Number of nodes with available pods: 0 Jan 6 17:52:38.444: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 6 17:52:38.528: INFO: Number of nodes with available pods: 0 Jan 6 17:52:38.528: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:52:39.532: INFO: Number of nodes with available pods: 0 Jan 6 17:52:39.532: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:52:40.533: INFO: Number of nodes with available pods: 0 Jan 6 17:52:40.533: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:52:41.533: INFO: Number of nodes with available pods: 0 Jan 6 17:52:41.533: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:52:42.532: INFO: Number of nodes with available pods: 1 Jan 6 17:52:42.532: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 6 17:52:42.559: INFO: Number of nodes with available pods: 1 Jan 6 17:52:42.559: INFO: Number of running nodes: 0, number of available pods: 1 Jan 6 17:52:43.605: INFO: Number of nodes with available pods: 0 Jan 6 17:52:43.605: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 6 17:52:43.698: INFO: Number of nodes with available pods: 0 Jan 6 17:52:43.698: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:52:44.703: INFO: Number of nodes with available pods: 0 Jan 6 17:52:44.703: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:52:45.703: INFO: Number of nodes with available pods: 0 Jan 6 17:52:45.703: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:52:46.702: INFO: Number of nodes with available pods: 0 Jan 6 17:52:46.702: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:52:47.702: INFO: Number of nodes with available pods: 0 Jan 6 17:52:47.702: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:52:48.703: INFO: Number of nodes with available pods: 0 Jan 6 
17:52:48.703: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:52:49.718: INFO: Number of nodes with available pods: 0 Jan 6 17:52:49.718: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:52:50.703: INFO: Number of nodes with available pods: 1 Jan 6 17:52:50.703: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4j628, will wait for the garbage collector to delete the pods Jan 6 17:52:50.768: INFO: Deleting DaemonSet.extensions daemon-set took: 6.472059ms Jan 6 17:52:50.868: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.25989ms Jan 6 17:53:04.872: INFO: Number of nodes with available pods: 0 Jan 6 17:53:04.872: INFO: Number of running nodes: 0, number of available pods: 0 Jan 6 17:53:04.876: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4j628/daemonsets","resourceVersion":"18055280"},"items":null} Jan 6 17:53:04.878: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4j628/pods","resourceVersion":"18055280"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:53:04.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-4j628" for this suite. 
Jan 6 17:53:10.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:53:11.029: INFO: namespace: e2e-tests-daemonsets-4j628, resource: bindings, ignored listing per whitelist Jan 6 17:53:11.055: INFO: namespace e2e-tests-daemonsets-4j628 deletion completed in 6.122818118s • [SLOW TEST:32.745 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:53:11.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-0aa8cae0-5048-11eb-8655-0242ac110009 STEP: Creating a pod to test consume secrets Jan 6 17:53:11.180: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0aaa123a-5048-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-rznf5" to be "success or failure" Jan 6 17:53:11.184: INFO: Pod "pod-projected-secrets-0aaa123a-5048-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.956444ms Jan 6 17:53:13.392: INFO: Pod "pod-projected-secrets-0aaa123a-5048-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212172561s Jan 6 17:53:15.396: INFO: Pod "pod-projected-secrets-0aaa123a-5048-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.216586661s STEP: Saw pod success Jan 6 17:53:15.396: INFO: Pod "pod-projected-secrets-0aaa123a-5048-11eb-8655-0242ac110009" satisfied condition "success or failure" Jan 6 17:53:15.399: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-0aaa123a-5048-11eb-8655-0242ac110009 container projected-secret-volume-test: STEP: delete the pod Jan 6 17:53:15.418: INFO: Waiting for pod pod-projected-secrets-0aaa123a-5048-11eb-8655-0242ac110009 to disappear Jan 6 17:53:15.423: INFO: Pod pod-projected-secrets-0aaa123a-5048-11eb-8655-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:53:15.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rznf5" for this suite. 
Jan 6 17:53:21.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:53:21.464: INFO: namespace: e2e-tests-projected-rznf5, resource: bindings, ignored listing per whitelist Jan 6 17:53:21.516: INFO: namespace e2e-tests-projected-rznf5 deletion completed in 6.09053657s • [SLOW TEST:10.461 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:53:21.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-10e28a48-5048-11eb-8655-0242ac110009 STEP: Creating a pod to test consume configMaps Jan 6 17:53:21.645: INFO: Waiting up to 5m0s for pod "pod-configmaps-10e58b12-5048-11eb-8655-0242ac110009" in namespace "e2e-tests-configmap-wqltq" to be "success or failure" Jan 6 17:53:21.648: INFO: Pod "pod-configmaps-10e58b12-5048-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.081229ms Jan 6 17:53:23.833: INFO: Pod "pod-configmaps-10e58b12-5048-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187891705s Jan 6 17:53:25.836: INFO: Pod "pod-configmaps-10e58b12-5048-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.191165074s STEP: Saw pod success Jan 6 17:53:25.836: INFO: Pod "pod-configmaps-10e58b12-5048-11eb-8655-0242ac110009" satisfied condition "success or failure" Jan 6 17:53:25.839: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-10e58b12-5048-11eb-8655-0242ac110009 container configmap-volume-test: STEP: delete the pod Jan 6 17:53:25.881: INFO: Waiting for pod pod-configmaps-10e58b12-5048-11eb-8655-0242ac110009 to disappear Jan 6 17:53:25.900: INFO: Pod pod-configmaps-10e58b12-5048-11eb-8655-0242ac110009 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:53:25.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-wqltq" for this suite. 
Jan 6 17:53:31.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:53:31.972: INFO: namespace: e2e-tests-configmap-wqltq, resource: bindings, ignored listing per whitelist Jan 6 17:53:32.048: INFO: namespace e2e-tests-configmap-wqltq deletion completed in 6.145966699s • [SLOW TEST:10.531 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:53:32.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:53:38.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-wjmnn" for this suite. Jan 6 17:53:44.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:53:44.531: INFO: namespace: e2e-tests-namespaces-wjmnn, resource: bindings, ignored listing per whitelist Jan 6 17:53:44.571: INFO: namespace e2e-tests-namespaces-wjmnn deletion completed in 6.111211566s STEP: Destroying namespace "e2e-tests-nsdeletetest-6zmwt" for this suite. Jan 6 17:53:44.574: INFO: Namespace e2e-tests-nsdeletetest-6zmwt was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-xkqx7" for this suite. Jan 6 17:53:50.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:53:50.684: INFO: namespace: e2e-tests-nsdeletetest-xkqx7, resource: bindings, ignored listing per whitelist Jan 6 17:53:50.713: INFO: namespace e2e-tests-nsdeletetest-xkqx7 deletion completed in 6.139613469s • [SLOW TEST:18.665 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:53:50.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jan 6 17:53:50.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-49t9b run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 6 17:53:55.063: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0106 17:53:54.969698 1753 log.go:172] (0xc00014c9a0) (0xc0005d1680) Create stream\nI0106 17:53:54.969770 1753 log.go:172] (0xc00014c9a0) (0xc0005d1680) Stream added, broadcasting: 1\nI0106 17:53:54.972799 1753 log.go:172] (0xc00014c9a0) Reply frame received for 1\nI0106 17:53:54.973003 1753 log.go:172] (0xc00014c9a0) (0xc0001c2000) Create stream\nI0106 17:53:54.973039 1753 log.go:172] (0xc00014c9a0) (0xc0001c2000) Stream added, broadcasting: 3\nI0106 17:53:54.973924 1753 log.go:172] (0xc00014c9a0) Reply frame received for 3\nI0106 17:53:54.973968 1753 log.go:172] (0xc00014c9a0) (0xc0007ec280) Create stream\nI0106 17:53:54.973982 1753 log.go:172] (0xc00014c9a0) (0xc0007ec280) Stream added, broadcasting: 5\nI0106 17:53:54.975214 1753 log.go:172] (0xc00014c9a0) Reply frame received for 5\nI0106 17:53:54.975260 1753 log.go:172] (0xc00014c9a0) (0xc0001c20a0) Create stream\nI0106 17:53:54.975274 1753 log.go:172] (0xc00014c9a0) (0xc0001c20a0) Stream added, broadcasting: 7\nI0106 17:53:54.976448 1753 log.go:172] (0xc00014c9a0) Reply frame received for 7\nI0106 17:53:54.976658 1753 log.go:172] (0xc0001c2000) (3) Writing data frame\nI0106 17:53:54.976792 1753 log.go:172] (0xc0001c2000) (3) Writing data frame\nI0106 17:53:54.977921 1753 log.go:172] (0xc00014c9a0) Data frame received for 5\nI0106 17:53:54.977939 1753 log.go:172] (0xc0007ec280) (5) Data frame handling\nI0106 17:53:54.977954 1753 log.go:172] (0xc0007ec280) (5) Data frame sent\nI0106 17:53:54.978638 1753 log.go:172] (0xc00014c9a0) Data frame received for 5\nI0106 17:53:54.978661 1753 log.go:172] (0xc0007ec280) (5) Data frame handling\nI0106 17:53:54.978684 1753 log.go:172] (0xc0007ec280) (5) Data frame sent\nI0106 17:53:55.011480 1753 log.go:172] (0xc00014c9a0) Data frame received for 7\nI0106 17:53:55.011544 1753 log.go:172] (0xc0001c20a0) (7) Data frame handling\nI0106 17:53:55.011584 1753 
log.go:172] (0xc00014c9a0) Data frame received for 5\nI0106 17:53:55.011619 1753 log.go:172] (0xc0007ec280) (5) Data frame handling\nI0106 17:53:55.011752 1753 log.go:172] (0xc00014c9a0) Data frame received for 1\nI0106 17:53:55.011776 1753 log.go:172] (0xc0005d1680) (1) Data frame handling\nI0106 17:53:55.011797 1753 log.go:172] (0xc0005d1680) (1) Data frame sent\nI0106 17:53:55.011958 1753 log.go:172] (0xc00014c9a0) (0xc0005d1680) Stream removed, broadcasting: 1\nI0106 17:53:55.012126 1753 log.go:172] (0xc00014c9a0) (0xc0005d1680) Stream removed, broadcasting: 1\nI0106 17:53:55.012205 1753 log.go:172] (0xc00014c9a0) (0xc0001c2000) Stream removed, broadcasting: 3\nI0106 17:53:55.012276 1753 log.go:172] (0xc00014c9a0) Go away received\nI0106 17:53:55.012310 1753 log.go:172] (0xc00014c9a0) (0xc0007ec280) Stream removed, broadcasting: 5\nI0106 17:53:55.012333 1753 log.go:172] (0xc00014c9a0) (0xc0001c20a0) Stream removed, broadcasting: 7\n" Jan 6 17:53:55.063: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:53:57.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-49t9b" for this suite. 
Jan 6 17:54:03.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:54:03.180: INFO: namespace: e2e-tests-kubectl-49t9b, resource: bindings, ignored listing per whitelist Jan 6 17:54:03.241: INFO: namespace e2e-tests-kubectl-49t9b deletion completed in 6.167096728s • [SLOW TEST:12.527 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:54:03.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 6 17:54:03.357: INFO: Waiting up to 5m0s for pod "downward-api-29c3e6fe-5048-11eb-8655-0242ac110009" in namespace "e2e-tests-downward-api-wzgrc" to be "success or failure" Jan 6 17:54:03.365: INFO: Pod "downward-api-29c3e6fe-5048-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.118504ms Jan 6 17:54:05.369: INFO: Pod "downward-api-29c3e6fe-5048-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01150144s Jan 6 17:54:07.373: INFO: Pod "downward-api-29c3e6fe-5048-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015693505s STEP: Saw pod success Jan 6 17:54:07.373: INFO: Pod "downward-api-29c3e6fe-5048-11eb-8655-0242ac110009" satisfied condition "success or failure" Jan 6 17:54:07.376: INFO: Trying to get logs from node hunter-worker pod downward-api-29c3e6fe-5048-11eb-8655-0242ac110009 container dapi-container: STEP: delete the pod Jan 6 17:54:07.409: INFO: Waiting for pod downward-api-29c3e6fe-5048-11eb-8655-0242ac110009 to disappear Jan 6 17:54:07.425: INFO: Pod downward-api-29c3e6fe-5048-11eb-8655-0242ac110009 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:54:07.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wzgrc" for this suite. 
Jan 6 17:54:13.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:54:13.535: INFO: namespace: e2e-tests-downward-api-wzgrc, resource: bindings, ignored listing per whitelist
Jan 6 17:54:13.555: INFO: namespace e2e-tests-downward-api-wzgrc deletion completed in 6.126003909s
• [SLOW TEST:10.314 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:54:13.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 6 17:54:13.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-jvzxw'
Jan 6 17:54:13.848: INFO: stderr: ""
Jan 6 17:54:13.848: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan 6 17:54:13.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jvzxw'
Jan 6 17:54:18.011: INFO: stderr: ""
Jan 6 17:54:18.011: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:54:18.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jvzxw" for this suite.
Jan 6 17:54:24.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:54:24.063: INFO: namespace: e2e-tests-kubectl-jvzxw, resource: bindings, ignored listing per whitelist
Jan 6 17:54:24.127: INFO: namespace e2e-tests-kubectl-jvzxw deletion completed in 6.104898836s
• [SLOW TEST:10.573 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:54:24.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 6 17:54:34.312: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jp5xg PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 6 17:54:34.312: INFO: >>> kubeConfig: /root/.kube/config
I0106 17:54:34.342384 6 log.go:172] (0xc000ad5550) (0xc0019a2be0) Create stream
I0106 17:54:34.342412 6 log.go:172] (0xc000ad5550) (0xc0019a2be0) Stream added, broadcasting: 1
I0106 17:54:34.344791 6 log.go:172] (0xc000ad5550) Reply frame received for 1
I0106 17:54:34.344819 6 log.go:172] (0xc000ad5550) (0xc0019a2c80) Create stream
I0106 17:54:34.344898 6 log.go:172] (0xc000ad5550) (0xc0019a2c80) Stream added, broadcasting: 3
I0106 17:54:34.345779 6 log.go:172] (0xc000ad5550) Reply frame received for 3
I0106 17:54:34.345808 6 log.go:172] (0xc000ad5550) (0xc0019a2d20) Create stream
I0106 17:54:34.345818 6 log.go:172] (0xc000ad5550) (0xc0019a2d20) Stream added, broadcasting: 5
I0106 17:54:34.346511 6 log.go:172] (0xc000ad5550) Reply frame received for 5
I0106 17:54:34.405805 6 log.go:172] (0xc000ad5550) Data frame received for 5
I0106 17:54:34.405836 6 log.go:172] (0xc0019a2d20) (5) Data frame handling
I0106 17:54:34.405887 6 log.go:172] (0xc000ad5550) Data frame received for 3
I0106 17:54:34.405936 6 log.go:172] (0xc0019a2c80) (3) Data frame handling
I0106 17:54:34.405976 6 log.go:172] (0xc0019a2c80) (3) Data frame sent
I0106 17:54:34.405999 6 log.go:172] (0xc000ad5550) Data frame received for 3
I0106 17:54:34.406017 6 log.go:172] (0xc0019a2c80) (3) Data frame handling
I0106 17:54:34.407655 6 log.go:172] (0xc000ad5550) Data frame received for 1
I0106 17:54:34.407683 6 log.go:172] (0xc0019a2be0) (1) Data frame handling
I0106 17:54:34.407697 6 log.go:172] (0xc0019a2be0) (1) Data frame sent
I0106 17:54:34.407712 6 log.go:172] (0xc000ad5550) (0xc0019a2be0) Stream removed, broadcasting: 1
I0106 17:54:34.407777 6 log.go:172] (0xc000ad5550) Go away received
I0106 17:54:34.407819 6 log.go:172] (0xc000ad5550) (0xc0019a2be0) Stream removed, broadcasting: 1
I0106 17:54:34.407855 6 log.go:172] (0xc000ad5550) (0xc0019a2c80) Stream removed, broadcasting: 3
I0106 17:54:34.407876 6 log.go:172] (0xc000ad5550) (0xc0019a2d20) Stream removed, broadcasting: 5
Jan 6 17:54:34.407: INFO: Exec stderr: ""
Jan 6 17:54:34.407: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jp5xg PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 6 17:54:34.407: INFO: >>> kubeConfig: /root/.kube/config
I0106 17:54:34.441049 6 log.go:172] (0xc001a444d0) (0xc001506fa0) Create stream
I0106 17:54:34.441076 6 log.go:172] (0xc001a444d0) (0xc001506fa0) Stream added, broadcasting: 1
I0106 17:54:34.443329 6 log.go:172] (0xc001a444d0) Reply frame received for 1
I0106 17:54:34.443373 6 log.go:172] (0xc001a444d0) (0xc001507040) Create stream
I0106 17:54:34.443399 6 log.go:172] (0xc001a444d0) (0xc001507040) Stream added, broadcasting: 3
I0106 17:54:34.444269 6 log.go:172] (0xc001a444d0) Reply frame received for 3
I0106 17:54:34.444287 6 log.go:172] (0xc001a444d0) (0xc000daf9a0) Create stream
I0106 17:54:34.444293 6 log.go:172] (0xc001a444d0) (0xc000daf9a0) Stream added, broadcasting: 5
I0106 17:54:34.445497 6 log.go:172] (0xc001a444d0) Reply frame received for 5
I0106 17:54:34.506052 6 log.go:172] (0xc001a444d0) Data frame received for 3
I0106 17:54:34.506111 6 log.go:172] (0xc001507040) (3) Data frame handling
I0106 17:54:34.506132 6 log.go:172] (0xc001507040) (3) Data frame sent
I0106 17:54:34.506146 6 log.go:172] (0xc001a444d0) Data frame received for 3
I0106 17:54:34.506158 6 log.go:172] (0xc001507040) (3) Data frame handling
I0106 17:54:34.506201 6 log.go:172] (0xc001a444d0) Data frame received for 5
I0106 17:54:34.506240 6 log.go:172] (0xc000daf9a0) (5) Data frame handling
I0106 17:54:34.507498 6 log.go:172] (0xc001a444d0) Data frame received for 1
I0106 17:54:34.507525 6 log.go:172] (0xc001506fa0) (1) Data frame handling
I0106 17:54:34.507560 6 log.go:172] (0xc001506fa0) (1) Data frame sent
I0106 17:54:34.507829 6 log.go:172] (0xc001a444d0) (0xc001506fa0) Stream removed, broadcasting: 1
I0106 17:54:34.507873 6 log.go:172] (0xc001a444d0) Go away received
I0106 17:54:34.507983 6 log.go:172] (0xc001a444d0) (0xc001506fa0) Stream removed, broadcasting: 1
I0106 17:54:34.508013 6 log.go:172] (0xc001a444d0) (0xc001507040) Stream removed, broadcasting: 3
I0106 17:54:34.508022 6 log.go:172] (0xc001a444d0) (0xc000daf9a0) Stream removed, broadcasting: 5
Jan 6 17:54:34.508: INFO: Exec stderr: ""
Jan 6 17:54:34.508: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jp5xg PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 6 17:54:34.508: INFO: >>> kubeConfig: /root/.kube/config
I0106 17:54:34.536270 6 log.go:172] (0xc001a449a0) (0xc001507400) Create stream
I0106 17:54:34.536287 6 log.go:172] (0xc001a449a0) (0xc001507400) Stream added, broadcasting: 1
I0106 17:54:34.541404 6 log.go:172] (0xc001a449a0) Reply frame received for 1
I0106 17:54:34.541446 6 log.go:172] (0xc001a449a0) (0xc001fc08c0) Create stream
I0106 17:54:34.541462 6 log.go:172] (0xc001a449a0) (0xc001fc08c0) Stream added, broadcasting: 3
I0106 17:54:34.543149 6 log.go:172] (0xc001a449a0) Reply frame received for 3
I0106 17:54:34.543170 6 log.go:172] (0xc001a449a0) (0xc0015074a0) Create stream
I0106 17:54:34.543180 6 log.go:172] (0xc001a449a0) (0xc0015074a0) Stream added, broadcasting: 5
I0106 17:54:34.544957 6 log.go:172] (0xc001a449a0) Reply frame received for 5
I0106 17:54:34.608521 6 log.go:172] (0xc001a449a0) Data frame received for 3
I0106 17:54:34.608555 6 log.go:172] (0xc001fc08c0) (3) Data frame handling
I0106 17:54:34.608569 6 log.go:172] (0xc001fc08c0) (3) Data frame sent
I0106 17:54:34.608575 6 log.go:172] (0xc001a449a0) Data frame received for 3
I0106 17:54:34.608587 6 log.go:172] (0xc001fc08c0) (3) Data frame handling
I0106 17:54:34.609031 6 log.go:172] (0xc001a449a0) Data frame received for 5
I0106 17:54:34.609060 6 log.go:172] (0xc0015074a0) (5) Data frame handling
I0106 17:54:34.611755 6 log.go:172] (0xc001a449a0) Data frame received for 1
I0106 17:54:34.611779 6 log.go:172] (0xc001507400) (1) Data frame handling
I0106 17:54:34.611796 6 log.go:172] (0xc001507400) (1) Data frame sent
I0106 17:54:34.611806 6 log.go:172] (0xc001a449a0) (0xc001507400) Stream removed, broadcasting: 1
I0106 17:54:34.611870 6 log.go:172] (0xc001a449a0) (0xc001507400) Stream removed, broadcasting: 1
I0106 17:54:34.611881 6 log.go:172] (0xc001a449a0) (0xc001fc08c0) Stream removed, broadcasting: 3
I0106 17:54:34.611888 6 log.go:172] (0xc001a449a0) (0xc0015074a0) Stream removed, broadcasting: 5
Jan 6 17:54:34.611: INFO: Exec stderr: ""
Jan 6 17:54:34.611: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jp5xg PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 6 17:54:34.611: INFO: >>> kubeConfig: /root/.kube/config
I0106 17:54:34.611987 6 log.go:172] (0xc001a449a0) Go away received
I0106 17:54:34.636401 6 log.go:172] (0xc0023162c0) (0xc001766280) Create stream
I0106 17:54:34.636441 6 log.go:172] (0xc0023162c0) (0xc001766280) Stream added, broadcasting: 1
I0106 17:54:34.647787 6 log.go:172] (0xc0023162c0) Reply frame received for 1
I0106 17:54:34.647834 6 log.go:172] (0xc0023162c0) (0xc001fc0a00) Create stream
I0106 17:54:34.647843 6 log.go:172] (0xc0023162c0) (0xc001fc0a00) Stream added, broadcasting: 3
I0106 17:54:34.648667 6 log.go:172] (0xc0023162c0) Reply frame received for 3
I0106 17:54:34.648693 6 log.go:172] (0xc0023162c0) (0xc001507540) Create stream
I0106 17:54:34.648701 6 log.go:172] (0xc0023162c0) (0xc001507540) Stream added, broadcasting: 5
I0106 17:54:34.649420 6 log.go:172] (0xc0023162c0) Reply frame received for 5
I0106 17:54:34.705431 6 log.go:172] (0xc0023162c0) Data frame received for 5
I0106 17:54:34.705473 6 log.go:172] (0xc0023162c0) Data frame received for 3
I0106 17:54:34.705528 6 log.go:172] (0xc001fc0a00) (3) Data frame handling
I0106 17:54:34.705560 6 log.go:172] (0xc001fc0a00) (3) Data frame sent
I0106 17:54:34.705577 6 log.go:172] (0xc0023162c0) Data frame received for 3
I0106 17:54:34.705600 6 log.go:172] (0xc001fc0a00) (3) Data frame handling
I0106 17:54:34.705630 6 log.go:172] (0xc001507540) (5) Data frame handling
I0106 17:54:34.706935 6 log.go:172] (0xc0023162c0) Data frame received for 1
I0106 17:54:34.706975 6 log.go:172] (0xc001766280) (1) Data frame handling
I0106 17:54:34.707014 6 log.go:172] (0xc001766280) (1) Data frame sent
I0106 17:54:34.707037 6 log.go:172] (0xc0023162c0) (0xc001766280) Stream removed, broadcasting: 1
I0106 17:54:34.707060 6 log.go:172] (0xc0023162c0) Go away received
I0106 17:54:34.707177 6 log.go:172] (0xc0023162c0) (0xc001766280) Stream removed, broadcasting: 1
I0106 17:54:34.707193 6 log.go:172] (0xc0023162c0) (0xc001fc0a00) Stream removed, broadcasting: 3
I0106 17:54:34.707201 6 log.go:172] (0xc0023162c0) (0xc001507540) Stream removed, broadcasting: 5
Jan 6 17:54:34.707: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 6 17:54:34.707: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jp5xg PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 6 17:54:34.707: INFO: >>> kubeConfig: /root/.kube/config
I0106 17:54:34.746873 6 log.go:172] (0xc001a44e70) (0xc001507860) Create stream
I0106 17:54:34.746928 6 log.go:172] (0xc001a44e70) (0xc001507860) Stream added, broadcasting: 1
I0106 17:54:34.748803 6 log.go:172] (0xc001a44e70) Reply frame received for 1
I0106 17:54:34.748941 6 log.go:172] (0xc001a44e70) (0xc001766320) Create stream
I0106 17:54:34.748978 6 log.go:172] (0xc001a44e70) (0xc001766320) Stream added, broadcasting: 3
I0106 17:54:34.750322 6 log.go:172] (0xc001a44e70) Reply frame received for 3
I0106 17:54:34.750349 6 log.go:172] (0xc001a44e70) (0xc001fc0aa0) Create stream
I0106 17:54:34.750359 6 log.go:172] (0xc001a44e70) (0xc001fc0aa0) Stream added, broadcasting: 5
I0106 17:54:34.751171 6 log.go:172] (0xc001a44e70) Reply frame received for 5
I0106 17:54:34.812946 6 log.go:172] (0xc001a44e70) Data frame received for 5
I0106 17:54:34.812987 6 log.go:172] (0xc001fc0aa0) (5) Data frame handling
I0106 17:54:34.813010 6 log.go:172] (0xc001a44e70) Data frame received for 3
I0106 17:54:34.813026 6 log.go:172] (0xc001766320) (3) Data frame handling
I0106 17:54:34.813045 6 log.go:172] (0xc001766320) (3) Data frame sent
I0106 17:54:34.813059 6 log.go:172] (0xc001a44e70) Data frame received for 3
I0106 17:54:34.813069 6 log.go:172] (0xc001766320) (3) Data frame handling
I0106 17:54:34.814186 6 log.go:172] (0xc001a44e70) Data frame received for 1
I0106 17:54:34.814208 6 log.go:172] (0xc001507860) (1) Data frame handling
I0106 17:54:34.814226 6 log.go:172] (0xc001507860) (1) Data frame sent
I0106 17:54:34.814242 6 log.go:172] (0xc001a44e70) (0xc001507860) Stream removed, broadcasting: 1
I0106 17:54:34.814261 6 log.go:172] (0xc001a44e70) Go away received
I0106 17:54:34.814355 6 log.go:172] (0xc001a44e70) (0xc001507860) Stream removed, broadcasting: 1
I0106 17:54:34.814373 6 log.go:172] (0xc001a44e70) (0xc001766320) Stream removed, broadcasting: 3
I0106 17:54:34.814386 6 log.go:172] (0xc001a44e70) (0xc001fc0aa0) Stream removed, broadcasting: 5
Jan 6 17:54:34.814: INFO: Exec stderr: ""
Jan 6 17:54:34.814: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jp5xg PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 6 17:54:34.814: INFO: >>> kubeConfig: /root/.kube/config
I0106 17:54:34.845843 6 log.go:172] (0xc0019162c0) (0xc001fc0e60) Create stream
I0106 17:54:34.845878 6 log.go:172] (0xc0019162c0) (0xc001fc0e60) Stream added, broadcasting: 1
I0106 17:54:34.848216 6 log.go:172] (0xc0019162c0) Reply frame received for 1
I0106 17:54:34.848249 6 log.go:172] (0xc0019162c0) (0xc0019a2dc0) Create stream
I0106 17:54:34.848259 6 log.go:172] (0xc0019162c0) (0xc0019a2dc0) Stream added, broadcasting: 3
I0106 17:54:34.849341 6 log.go:172] (0xc0019162c0) Reply frame received for 3
I0106 17:54:34.849379 6 log.go:172] (0xc0019162c0) (0xc0017663c0) Create stream
I0106 17:54:34.849399 6 log.go:172] (0xc0019162c0) (0xc0017663c0) Stream added, broadcasting: 5
I0106 17:54:34.850283 6 log.go:172] (0xc0019162c0) Reply frame received for 5
I0106 17:54:34.911107 6 log.go:172] (0xc0019162c0) Data frame received for 5
I0106 17:54:34.911141 6 log.go:172] (0xc0017663c0) (5) Data frame handling
I0106 17:54:34.911162 6 log.go:172] (0xc0019162c0) Data frame received for 3
I0106 17:54:34.911171 6 log.go:172] (0xc0019a2dc0) (3) Data frame handling
I0106 17:54:34.911184 6 log.go:172] (0xc0019a2dc0) (3) Data frame sent
I0106 17:54:34.911192 6 log.go:172] (0xc0019162c0) Data frame received for 3
I0106 17:54:34.911200 6 log.go:172] (0xc0019a2dc0) (3) Data frame handling
I0106 17:54:34.912274 6 log.go:172] (0xc0019162c0) Data frame received for 1
I0106 17:54:34.912291 6 log.go:172] (0xc001fc0e60) (1) Data frame handling
I0106 17:54:34.912304 6 log.go:172] (0xc001fc0e60) (1) Data frame sent
I0106 17:54:34.912326 6 log.go:172] (0xc0019162c0) (0xc001fc0e60) Stream removed, broadcasting: 1
I0106 17:54:34.912357 6 log.go:172] (0xc0019162c0) Go away received
I0106 17:54:34.912406 6 log.go:172] (0xc0019162c0) (0xc001fc0e60) Stream removed, broadcasting: 1
I0106 17:54:34.912420 6 log.go:172] (0xc0019162c0) (0xc0019a2dc0) Stream removed, broadcasting: 3
I0106 17:54:34.912430 6 log.go:172] (0xc0019162c0) (0xc0017663c0) Stream removed, broadcasting: 5
Jan 6 17:54:34.912: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 6 17:54:34.912: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jp5xg PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 6 17:54:34.912: INFO: >>> kubeConfig: /root/.kube/config
I0106 17:54:34.938251 6 log.go:172] (0xc000ad5a20) (0xc0019a30e0) Create stream
I0106 17:54:34.938284 6 log.go:172] (0xc000ad5a20) (0xc0019a30e0) Stream added, broadcasting: 1
I0106 17:54:34.940991 6 log.go:172] (0xc000ad5a20) Reply frame received for 1
I0106 17:54:34.941027 6 log.go:172] (0xc000ad5a20) (0xc0024e8f00) Create stream
I0106 17:54:34.941056 6 log.go:172] (0xc000ad5a20) (0xc0024e8f00) Stream added, broadcasting: 3
I0106 17:54:34.942250 6 log.go:172] (0xc000ad5a20) Reply frame received for 3
I0106 17:54:34.942283 6 log.go:172] (0xc000ad5a20) (0xc0024e8fa0) Create stream
I0106 17:54:34.942295 6 log.go:172] (0xc000ad5a20) (0xc0024e8fa0) Stream added, broadcasting: 5
I0106 17:54:34.944455 6 log.go:172] (0xc000ad5a20) Reply frame received for 5
I0106 17:54:35.010161 6 log.go:172] (0xc000ad5a20) Data frame received for 5
I0106 17:54:35.010219 6 log.go:172] (0xc0024e8fa0) (5) Data frame handling
I0106 17:54:35.010255 6 log.go:172] (0xc000ad5a20) Data frame received for 3
I0106 17:54:35.010264 6 log.go:172] (0xc0024e8f00) (3) Data frame handling
I0106 17:54:35.010271 6 log.go:172] (0xc0024e8f00) (3) Data frame sent
I0106 17:54:35.010280 6 log.go:172] (0xc000ad5a20) Data frame received for 3
I0106 17:54:35.010284 6 log.go:172] (0xc0024e8f00) (3) Data frame handling
I0106 17:54:35.011499 6 log.go:172] (0xc000ad5a20) Data frame received for 1
I0106 17:54:35.011533 6 log.go:172] (0xc0019a30e0) (1) Data frame handling
I0106 17:54:35.011557 6 log.go:172] (0xc0019a30e0) (1) Data frame sent
I0106 17:54:35.011579 6 log.go:172] (0xc000ad5a20) (0xc0019a30e0) Stream removed, broadcasting: 1
I0106 17:54:35.011599 6 log.go:172] (0xc000ad5a20) Go away received
I0106 17:54:35.011720 6 log.go:172] (0xc000ad5a20) (0xc0019a30e0) Stream removed, broadcasting: 1
I0106 17:54:35.011748 6 log.go:172] (0xc000ad5a20) (0xc0024e8f00) Stream removed, broadcasting: 3
I0106 17:54:35.011763 6 log.go:172] (0xc000ad5a20) (0xc0024e8fa0) Stream removed, broadcasting: 5
Jan 6 17:54:35.011: INFO: Exec stderr: ""
Jan 6 17:54:35.011: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jp5xg PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 6 17:54:35.011: INFO: >>> kubeConfig: /root/.kube/config
I0106 17:54:35.039997 6 log.go:172] (0xc000ad51e0) (0xc000b28280) Create stream
I0106 17:54:35.040038 6 log.go:172] (0xc000ad51e0) (0xc000b28280) Stream added, broadcasting: 1
I0106 17:54:35.042185 6 log.go:172] (0xc000ad51e0) Reply frame received for 1
I0106 17:54:35.042236 6 log.go:172] (0xc000ad51e0) (0xc00049ed20) Create stream
I0106 17:54:35.042262 6 log.go:172] (0xc000ad51e0) (0xc00049ed20) Stream added, broadcasting: 3
I0106 17:54:35.042987 6 log.go:172] (0xc000ad51e0) Reply frame received for 3
I0106 17:54:35.043033 6 log.go:172] (0xc000ad51e0) (0xc0000fc280) Create stream
I0106 17:54:35.043054 6 log.go:172] (0xc000ad51e0) (0xc0000fc280) Stream added, broadcasting: 5
I0106 17:54:35.043765 6 log.go:172] (0xc000ad51e0) Reply frame received for 5
I0106 17:54:35.109619 6 log.go:172] (0xc000ad51e0) Data frame received for 3
I0106 17:54:35.109795 6 log.go:172] (0xc00049ed20) (3) Data frame handling
I0106 17:54:35.109838 6 log.go:172] (0xc00049ed20) (3) Data frame sent
I0106 17:54:35.109855 6 log.go:172] (0xc000ad51e0) Data frame received for 3
I0106 17:54:35.109888 6 log.go:172] (0xc000ad51e0) Data frame received for 5
I0106 17:54:35.109935 6 log.go:172] (0xc0000fc280) (5) Data frame handling
I0106 17:54:35.109966 6 log.go:172] (0xc00049ed20) (3) Data frame handling
I0106 17:54:35.111332 6 log.go:172] (0xc000ad51e0) Data frame received for 1
I0106 17:54:35.111373 6 log.go:172] (0xc000b28280) (1) Data frame handling
I0106 17:54:35.111394 6 log.go:172] (0xc000b28280) (1) Data frame sent
I0106 17:54:35.111417 6 log.go:172] (0xc000ad51e0) (0xc000b28280) Stream removed, broadcasting: 1
I0106 17:54:35.111452 6 log.go:172] (0xc000ad51e0) Go away received
I0106 17:54:35.111556 6 log.go:172] (0xc000ad51e0) (0xc000b28280) Stream removed, broadcasting: 1
I0106 17:54:35.111588 6 log.go:172] (0xc000ad51e0) (0xc00049ed20) Stream removed, broadcasting: 3
I0106 17:54:35.111607 6 log.go:172] (0xc000ad51e0) (0xc0000fc280) Stream removed, broadcasting: 5
Jan 6 17:54:35.111: INFO: Exec stderr: ""
Jan 6 17:54:35.111: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jp5xg PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 6 17:54:35.111: INFO: >>> kubeConfig: /root/.kube/config
I0106 17:54:35.141662 6 log.go:172] (0xc000ad5810) (0xc000b285a0) Create stream
I0106 17:54:35.141690 6 log.go:172] (0xc000ad5810) (0xc000b285a0) Stream added, broadcasting: 1
I0106 17:54:35.142910 6 log.go:172] (0xc000ad5810) Reply frame received for 1
I0106 17:54:35.142935 6 log.go:172] (0xc000ad5810) (0xc00049efa0) Create stream
I0106 17:54:35.142944 6 log.go:172] (0xc000ad5810) (0xc00049efa0) Stream added, broadcasting: 3
I0106 17:54:35.143669 6 log.go:172] (0xc000ad5810) Reply frame received for 3
I0106 17:54:35.143689 6 log.go:172] (0xc000ad5810) (0xc000194c80) Create stream
I0106 17:54:35.143697 6 log.go:172] (0xc000ad5810) (0xc000194c80) Stream added, broadcasting: 5
I0106 17:54:35.144328 6 log.go:172] (0xc000ad5810) Reply frame received for 5
I0106 17:54:35.218975 6 log.go:172] (0xc000ad5810) Data frame received for 5
I0106 17:54:35.218998 6 log.go:172] (0xc000194c80) (5) Data frame handling
I0106 17:54:35.219040 6 log.go:172] (0xc000ad5810) Data frame received for 3
I0106 17:54:35.219079 6 log.go:172] (0xc00049efa0) (3) Data frame handling
I0106 17:54:35.219100 6 log.go:172] (0xc00049efa0) (3) Data frame sent
I0106 17:54:35.219116 6 log.go:172] (0xc000ad5810) Data frame received for 3
I0106 17:54:35.219129 6 log.go:172] (0xc00049efa0) (3) Data frame handling
I0106 17:54:35.220400 6 log.go:172] (0xc000ad5810) Data frame received for 1
I0106 17:54:35.220432 6 log.go:172] (0xc000b285a0) (1) Data frame handling
I0106 17:54:35.220449 6 log.go:172] (0xc000b285a0) (1) Data frame sent
I0106 17:54:35.220464 6 log.go:172] (0xc000ad5810) (0xc000b285a0) Stream removed, broadcasting: 1
I0106 17:54:35.220491 6 log.go:172] (0xc000ad5810) Go away received
I0106 17:54:35.220594 6 log.go:172] (0xc000ad5810) (0xc000b285a0) Stream removed, broadcasting: 1
I0106 17:54:35.220616 6 log.go:172] (0xc000ad5810) (0xc00049efa0) Stream removed, broadcasting: 3
I0106 17:54:35.220630 6 log.go:172] (0xc000ad5810) (0xc000194c80) Stream removed, broadcasting: 5
Jan 6 17:54:35.220: INFO: Exec stderr: ""
Jan 6 17:54:35.220: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jp5xg PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 6 17:54:35.220: INFO: >>> kubeConfig: /root/.kube/config
I0106 17:54:35.250016 6 log.go:172] (0xc001a44370) (0xc000336fa0) Create stream
I0106 17:54:35.250048 6 log.go:172] (0xc001a44370) (0xc000336fa0) Stream added, broadcasting: 1
I0106 17:54:35.251915 6 log.go:172] (0xc001a44370) Reply frame received for 1
I0106 17:54:35.251963 6 log.go:172] (0xc001a44370) (0xc000372000) Create stream
I0106 17:54:35.251977 6 log.go:172] (0xc001a44370) (0xc000372000) Stream added, broadcasting: 3
I0106 17:54:35.252880 6 log.go:172] (0xc001a44370) Reply frame received for 3
I0106 17:54:35.252963 6 log.go:172] (0xc001a44370) (0xc000194fa0) Create stream
I0106 17:54:35.252977 6 log.go:172] (0xc001a44370) (0xc000194fa0) Stream added, broadcasting: 5
I0106 17:54:35.254070 6 log.go:172] (0xc001a44370) Reply frame received for 5
I0106 17:54:35.325741 6 log.go:172] (0xc001a44370) Data frame received for 5
I0106 17:54:35.325777 6 log.go:172] (0xc000194fa0) (5) Data frame handling
I0106 17:54:35.325817 6 log.go:172] (0xc001a44370) Data frame received for 3
I0106 17:54:35.325830 6 log.go:172] (0xc000372000) (3) Data frame handling
I0106 17:54:35.325846 6 log.go:172] (0xc000372000) (3) Data frame sent
I0106 17:54:35.325859 6 log.go:172] (0xc001a44370) Data frame received for 3
I0106 17:54:35.325871 6 log.go:172] (0xc000372000) (3) Data frame handling
I0106 17:54:35.327473 6 log.go:172] (0xc001a44370) Data frame received for 1
I0106 17:54:35.327519 6 log.go:172] (0xc000336fa0) (1) Data frame handling
I0106 17:54:35.327536 6 log.go:172] (0xc000336fa0) (1) Data frame sent
I0106 17:54:35.327566 6 log.go:172] (0xc001a44370) (0xc000336fa0) Stream removed, broadcasting: 1
I0106 17:54:35.327593 6 log.go:172] (0xc001a44370) Go away received
I0106 17:54:35.327761 6 log.go:172] (0xc001a44370) (0xc000336fa0) Stream removed, broadcasting: 1
I0106 17:54:35.327784 6 log.go:172] (0xc001a44370) (0xc000372000) Stream removed, broadcasting: 3
I0106 17:54:35.327794 6 log.go:172] (0xc001a44370) (0xc000194fa0) Stream removed, broadcasting: 5
Jan 6 17:54:35.327: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:54:35.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-jp5xg" for this suite.
Jan 6 17:55:27.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:55:27.383: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-jp5xg, resource: bindings, ignored listing per whitelist
Jan 6 17:55:27.454: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-jp5xg deletion completed in 52.123082562s
• [SLOW TEST:63.327 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:55:27.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:55:31.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-6tq5w" for this suite.
Jan 6 17:56:09.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:56:09.710: INFO: namespace: e2e-tests-kubelet-test-6tq5w, resource: bindings, ignored listing per whitelist
Jan 6 17:56:09.714: INFO: namespace e2e-tests-kubelet-test-6tq5w deletion completed in 38.098588864s
• [SLOW TEST:42.259 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:56:09.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:56:09.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-r9qvf" for this suite.
Jan 6 17:56:15.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:56:15.955: INFO: namespace: e2e-tests-services-r9qvf, resource: bindings, ignored listing per whitelist
Jan 6 17:56:16.020: INFO: namespace e2e-tests-services-r9qvf deletion completed in 6.134773892s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:6.306 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:56:16.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 6 17:56:16.109: INFO: Waiting up to 5m0s for pod "downward-api-78e45b9e-5048-11eb-8655-0242ac110009" in namespace "e2e-tests-downward-api-6qlxw" to be "success or failure"
Jan 6 17:56:16.122: INFO: Pod "downward-api-78e45b9e-5048-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 13.663657ms
Jan 6 17:56:18.127: INFO: Pod "downward-api-78e45b9e-5048-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018403911s
Jan 6 17:56:20.131: INFO: Pod "downward-api-78e45b9e-5048-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.022185385s
Jan 6 17:56:22.135: INFO: Pod "downward-api-78e45b9e-5048-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026829016s
STEP: Saw pod success
Jan 6 17:56:22.136: INFO: Pod "downward-api-78e45b9e-5048-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan 6 17:56:22.138: INFO: Trying to get logs from node hunter-worker2 pod downward-api-78e45b9e-5048-11eb-8655-0242ac110009 container dapi-container:
STEP: delete the pod
Jan 6 17:56:22.201: INFO: Waiting for pod downward-api-78e45b9e-5048-11eb-8655-0242ac110009 to disappear
Jan 6 17:56:22.209: INFO: Pod downward-api-78e45b9e-5048-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:56:22.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6qlxw" for this suite.
Jan 6 17:56:28.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:56:28.297: INFO: namespace: e2e-tests-downward-api-6qlxw, resource: bindings, ignored listing per whitelist
Jan 6 17:56:28.311: INFO: namespace e2e-tests-downward-api-6qlxw deletion completed in 6.099128752s
• [SLOW TEST:12.291 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:56:28.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-f4qzb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-f4qzb to expose endpoints map[]
Jan 6 17:56:28.455: INFO: Get endpoints failed (17.170925ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 6 17:56:29.458: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-f4qzb exposes endpoints map[] (1.020879552s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-f4qzb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-f4qzb to expose endpoints map[pod1:[100]]
Jan 6 17:56:33.510: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-f4qzb exposes endpoints map[pod1:[100]] (4.044554663s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-f4qzb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-f4qzb to expose endpoints map[pod1:[100] pod2:[101]]
Jan 6 17:56:36.647: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-f4qzb exposes endpoints map[pod1:[100] pod2:[101]] (3.134067093s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-f4qzb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-f4qzb to expose endpoints map[pod2:[101]]
Jan 6 17:56:37.667: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-f4qzb exposes endpoints map[pod2:[101]] (1.016502553s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-f4qzb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-f4qzb to expose endpoints map[]
Jan 6 17:56:37.704: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-f4qzb exposes endpoints map[] (31.817941ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:56:37.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-f4qzb" for this suite.
Jan 6 17:56:59.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:56:59.883: INFO: namespace: e2e-tests-services-f4qzb, resource: bindings, ignored listing per whitelist Jan 6 17:56:59.911: INFO: namespace e2e-tests-services-f4qzb deletion completed in 22.100121344s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:31.599 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:56:59.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 6 17:57:00.053: INFO: Waiting up to 5m0s for pod "pod-9315be60-5048-11eb-8655-0242ac110009" in namespace "e2e-tests-emptydir-n57jw" to be "success or failure" Jan 6 17:57:00.069: INFO: Pod "pod-9315be60-5048-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.33581ms Jan 6 17:57:02.227: INFO: Pod "pod-9315be60-5048-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173582102s Jan 6 17:57:04.230: INFO: Pod "pod-9315be60-5048-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.177259813s STEP: Saw pod success Jan 6 17:57:04.230: INFO: Pod "pod-9315be60-5048-11eb-8655-0242ac110009" satisfied condition "success or failure" Jan 6 17:57:04.233: INFO: Trying to get logs from node hunter-worker pod pod-9315be60-5048-11eb-8655-0242ac110009 container test-container: STEP: delete the pod Jan 6 17:57:04.264: INFO: Waiting for pod pod-9315be60-5048-11eb-8655-0242ac110009 to disappear Jan 6 17:57:04.279: INFO: Pod pod-9315be60-5048-11eb-8655-0242ac110009 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:57:04.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-n57jw" for this suite. 
Jan 6 17:57:10.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:57:10.445: INFO: namespace: e2e-tests-emptydir-n57jw, resource: bindings, ignored listing per whitelist Jan 6 17:57:10.478: INFO: namespace e2e-tests-emptydir-n57jw deletion completed in 6.194147521s • [SLOW TEST:10.567 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:57:10.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Jan 6 17:57:10.572: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:57:10.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "e2e-tests-kubectl-gcctc" for this suite. Jan 6 17:57:16.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:57:16.713: INFO: namespace: e2e-tests-kubectl-gcctc, resource: bindings, ignored listing per whitelist Jan 6 17:57:16.777: INFO: namespace e2e-tests-kubectl-gcctc deletion completed in 6.108711492s • [SLOW TEST:6.299 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:57:16.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-rwpx4 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rwpx4 to expose endpoints map[] Jan 6 17:57:16.915: INFO: Get endpoints failed (7.275507ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jan 6 17:57:17.919: INFO: 
successfully validated that service endpoint-test2 in namespace e2e-tests-services-rwpx4 exposes endpoints map[] (1.011117385s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-rwpx4 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rwpx4 to expose endpoints map[pod1:[80]] Jan 6 17:57:21.079: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rwpx4 exposes endpoints map[pod1:[80]] (3.154194692s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-rwpx4 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rwpx4 to expose endpoints map[pod1:[80] pod2:[80]] Jan 6 17:57:25.155: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rwpx4 exposes endpoints map[pod1:[80] pod2:[80]] (4.071631462s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-rwpx4 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rwpx4 to expose endpoints map[pod2:[80]] Jan 6 17:57:26.180: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rwpx4 exposes endpoints map[pod2:[80]] (1.020641719s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-rwpx4 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rwpx4 to expose endpoints map[] Jan 6 17:57:27.197: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rwpx4 exposes endpoints map[] (1.008507075s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:57:27.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-rwpx4" for this suite. 
Jan 6 17:57:49.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:57:49.297: INFO: namespace: e2e-tests-services-rwpx4, resource: bindings, ignored listing per whitelist Jan 6 17:57:49.340: INFO: namespace e2e-tests-services-rwpx4 deletion completed in 22.106634449s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:32.563 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:57:49.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 6 17:57:49.471: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b08a71fd-5048-11eb-8655-0242ac110009" in namespace "e2e-tests-downward-api-2hm57" to be "success or failure" Jan 6 17:57:49.490: INFO: 
Pod "downwardapi-volume-b08a71fd-5048-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 19.293141ms Jan 6 17:57:51.678: INFO: Pod "downwardapi-volume-b08a71fd-5048-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206871576s Jan 6 17:57:53.908: INFO: Pod "downwardapi-volume-b08a71fd-5048-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437487274s STEP: Saw pod success Jan 6 17:57:53.908: INFO: Pod "downwardapi-volume-b08a71fd-5048-11eb-8655-0242ac110009" satisfied condition "success or failure" Jan 6 17:57:53.912: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b08a71fd-5048-11eb-8655-0242ac110009 container client-container: STEP: delete the pod Jan 6 17:57:54.001: INFO: Waiting for pod downwardapi-volume-b08a71fd-5048-11eb-8655-0242ac110009 to disappear Jan 6 17:57:54.006: INFO: Pod downwardapi-volume-b08a71fd-5048-11eb-8655-0242ac110009 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 6 17:57:54.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2hm57" for this suite. 
Jan 6 17:58:00.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 17:58:00.078: INFO: namespace: e2e-tests-downward-api-2hm57, resource: bindings, ignored listing per whitelist Jan 6 17:58:00.132: INFO: namespace e2e-tests-downward-api-2hm57 deletion completed in 6.103724797s • [SLOW TEST:10.791 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 6 17:58:00.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 6 17:58:00.300: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jan 6 17:58:00.309: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:00.311: INFO: Number of nodes with available pods: 0 Jan 6 17:58:00.311: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:58:01.316: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:01.319: INFO: Number of nodes with available pods: 0 Jan 6 17:58:01.319: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:58:02.322: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:02.610: INFO: Number of nodes with available pods: 0 Jan 6 17:58:02.610: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:58:03.316: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:03.320: INFO: Number of nodes with available pods: 0 Jan 6 17:58:03.320: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:58:04.328: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:04.331: INFO: Number of nodes with available pods: 1 Jan 6 17:58:04.331: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:58:05.317: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:05.322: INFO: Number of nodes with available pods: 2 Jan 6 17:58:05.322: INFO: Number of 
running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 6 17:58:05.375: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:05.375: INFO: Wrong image for pod: daemon-set-xs82v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:05.381: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:06.385: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:06.385: INFO: Wrong image for pod: daemon-set-xs82v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:06.390: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:07.386: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:07.386: INFO: Wrong image for pod: daemon-set-xs82v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:07.390: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:08.398: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:08.398: INFO: Wrong image for pod: daemon-set-xs82v. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:08.398: INFO: Pod daemon-set-xs82v is not available Jan 6 17:58:08.401: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:09.385: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:09.385: INFO: Wrong image for pod: daemon-set-xs82v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:09.385: INFO: Pod daemon-set-xs82v is not available Jan 6 17:58:09.389: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:10.386: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:10.386: INFO: Wrong image for pod: daemon-set-xs82v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:10.386: INFO: Pod daemon-set-xs82v is not available Jan 6 17:58:10.390: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:11.385: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:11.386: INFO: Wrong image for pod: daemon-set-xs82v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jan 6 17:58:11.386: INFO: Pod daemon-set-xs82v is not available Jan 6 17:58:11.390: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:12.385: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:12.385: INFO: Wrong image for pod: daemon-set-xs82v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:12.385: INFO: Pod daemon-set-xs82v is not available Jan 6 17:58:12.389: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:13.385: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:13.385: INFO: Wrong image for pod: daemon-set-xs82v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:13.385: INFO: Pod daemon-set-xs82v is not available Jan 6 17:58:13.390: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:14.385: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:14.385: INFO: Wrong image for pod: daemon-set-xs82v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jan 6 17:58:14.385: INFO: Pod daemon-set-xs82v is not available Jan 6 17:58:14.389: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:15.386: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:15.386: INFO: Pod daemon-set-r8nxf is not available Jan 6 17:58:15.390: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:16.447: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:16.447: INFO: Pod daemon-set-r8nxf is not available Jan 6 17:58:16.451: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:17.386: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:17.386: INFO: Pod daemon-set-r8nxf is not available Jan 6 17:58:17.389: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:18.385: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jan 6 17:58:18.385: INFO: Pod daemon-set-r8nxf is not available Jan 6 17:58:18.389: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:19.385: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:19.389: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:20.384: INFO: Wrong image for pod: daemon-set-77nwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 6 17:58:20.384: INFO: Pod daemon-set-77nwx is not available Jan 6 17:58:20.388: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:21.385: INFO: Pod daemon-set-phwhx is not available Jan 6 17:58:21.388: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jan 6 17:58:21.392: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:21.394: INFO: Number of nodes with available pods: 1 Jan 6 17:58:21.394: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:58:22.400: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:22.404: INFO: Number of nodes with available pods: 1 Jan 6 17:58:22.404: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:58:23.412: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:23.415: INFO: Number of nodes with available pods: 1 Jan 6 17:58:23.415: INFO: Node hunter-worker is running more than one daemon pod Jan 6 17:58:24.400: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 6 17:58:24.404: INFO: Number of nodes with available pods: 2 Jan 6 17:58:24.404: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-sg672, will wait for the garbage collector to delete the pods Jan 6 17:58:24.480: INFO: Deleting DaemonSet.extensions daemon-set took: 6.975311ms Jan 6 17:58:24.580: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.287038ms Jan 6 17:58:34.986: INFO: Number of nodes with available pods: 0 Jan 6 17:58:34.986: INFO: Number of running nodes: 0, number of available 
pods: 0
Jan 6 17:58:34.988: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-sg672/daemonsets","resourceVersion":"18056458"},"items":null}
Jan 6 17:58:34.991: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-sg672/pods","resourceVersion":"18056458"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:58:35.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-sg672" for this suite.
Jan 6 17:58:41.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:58:41.111: INFO: namespace: e2e-tests-daemonsets-sg672, resource: bindings, ignored listing per whitelist
Jan 6 17:58:41.118: INFO: namespace e2e-tests-daemonsets-sg672 deletion completed in 6.11335846s
• [SLOW TEST:40.986 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:58:41.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-srhg
STEP: Creating a pod to test atomic-volume-subpath
Jan 6 17:58:41.226: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-srhg" in namespace "e2e-tests-subpath-8hnzj" to be "success or failure"
Jan 6 17:58:41.229: INFO: Pod "pod-subpath-test-projected-srhg": Phase="Pending", Reason="", readiness=false. Elapsed: 3.024892ms
Jan 6 17:58:43.233: INFO: Pod "pod-subpath-test-projected-srhg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007025426s
Jan 6 17:58:45.244: INFO: Pod "pod-subpath-test-projected-srhg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018152648s
Jan 6 17:58:47.249: INFO: Pod "pod-subpath-test-projected-srhg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022924013s
Jan 6 17:58:49.253: INFO: Pod "pod-subpath-test-projected-srhg": Phase="Running", Reason="", readiness=false. Elapsed: 8.026968896s
Jan 6 17:58:51.257: INFO: Pod "pod-subpath-test-projected-srhg": Phase="Running", Reason="", readiness=false. Elapsed: 10.031235848s
Jan 6 17:58:53.262: INFO: Pod "pod-subpath-test-projected-srhg": Phase="Running", Reason="", readiness=false. Elapsed: 12.035737485s
Jan 6 17:58:55.266: INFO: Pod "pod-subpath-test-projected-srhg": Phase="Running", Reason="", readiness=false. Elapsed: 14.039715886s
Jan 6 17:58:57.270: INFO: Pod "pod-subpath-test-projected-srhg": Phase="Running", Reason="", readiness=false. Elapsed: 16.043960382s
Jan 6 17:58:59.273: INFO: Pod "pod-subpath-test-projected-srhg": Phase="Running", Reason="", readiness=false. Elapsed: 18.047468891s
Jan 6 17:59:01.278: INFO: Pod "pod-subpath-test-projected-srhg": Phase="Running", Reason="", readiness=false. Elapsed: 20.051668473s
Jan 6 17:59:03.282: INFO: Pod "pod-subpath-test-projected-srhg": Phase="Running", Reason="", readiness=false. Elapsed: 22.056282262s
Jan 6 17:59:05.286: INFO: Pod "pod-subpath-test-projected-srhg": Phase="Running", Reason="", readiness=false. Elapsed: 24.060508189s
Jan 6 17:59:07.290: INFO: Pod "pod-subpath-test-projected-srhg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.064378762s
STEP: Saw pod success
Jan 6 17:59:07.290: INFO: Pod "pod-subpath-test-projected-srhg" satisfied condition "success or failure"
Jan 6 17:59:07.293: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-srhg container test-container-subpath-projected-srhg:
STEP: delete the pod
Jan 6 17:59:07.332: INFO: Waiting for pod pod-subpath-test-projected-srhg to disappear
Jan 6 17:59:07.472: INFO: Pod pod-subpath-test-projected-srhg no longer exists
STEP: Deleting pod pod-subpath-test-projected-srhg
Jan 6 17:59:07.472: INFO: Deleting pod "pod-subpath-test-projected-srhg" in namespace "e2e-tests-subpath-8hnzj"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:59:07.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-8hnzj" for this suite.
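The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` entries above come from a polling loop: the framework re-reads the pod phase every couple of seconds, logging the elapsed time, until the pod reaches a terminal phase or the timeout expires. A minimal standalone sketch of that pattern follows; the function name and the `get_phase` callable are illustrative stand-ins, not the real e2e framework API.

```python
import time

def wait_for_pod_phase(get_phase, timeout_s=300.0, interval_s=2.0):
    """Poll get_phase() until it returns a terminal pod phase or we time out.

    get_phase is a caller-supplied callable returning the current phase string
    ("Pending", "Running", "Succeeded", "Failed") -- a hypothetical hook, not
    part of the Kubernetes e2e framework itself.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        # Mirrors the log's per-poll status line, e.g.:
        # Pod "...": Phase="Running", ... Elapsed: 8.026968896s
        print(f'Pod phase={phase!r}, elapsed={elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod did not reach a terminal phase within {timeout_s}s")
        time.sleep(interval_s)
```

In the subpath test above, the pod cycles through Pending and Running for about 26 seconds before landing on Succeeded, which is exactly the terminal-phase exit in this loop.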
Jan 6 17:59:13.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 17:59:13.539: INFO: namespace: e2e-tests-subpath-8hnzj, resource: bindings, ignored listing per whitelist
Jan 6 17:59:13.603: INFO: namespace e2e-tests-subpath-8hnzj deletion completed in 6.123276489s
• [SLOW TEST:32.485 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with projected pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 6 17:59:13.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-4chpl
I0106 17:59:13.748401 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-4chpl, replica count: 1
I0106 17:59:14.798835 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0106 17:59:15.799050 6 runners.go:184] svc-latency-rc Pods: 1 out
of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0106 17:59:16.799264 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0106 17:59:17.799474 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 6 17:59:17.930: INFO: Created: latency-svc-rtf4q Jan 6 17:59:17.973: INFO: Got endpoints: latency-svc-rtf4q [73.478691ms] Jan 6 17:59:18.053: INFO: Created: latency-svc-fsq6d Jan 6 17:59:18.104: INFO: Got endpoints: latency-svc-fsq6d [130.936032ms] Jan 6 17:59:18.105: INFO: Created: latency-svc-8nw2z Jan 6 17:59:18.120: INFO: Got endpoints: latency-svc-8nw2z [146.242586ms] Jan 6 17:59:18.140: INFO: Created: latency-svc-4xfz4 Jan 6 17:59:18.184: INFO: Got endpoints: latency-svc-4xfz4 [211.24389ms] Jan 6 17:59:18.188: INFO: Created: latency-svc-kzlmv Jan 6 17:59:18.201: INFO: Got endpoints: latency-svc-kzlmv [228.000751ms] Jan 6 17:59:18.229: INFO: Created: latency-svc-kn782 Jan 6 17:59:18.249: INFO: Got endpoints: latency-svc-kn782 [276.436169ms] Jan 6 17:59:18.281: INFO: Created: latency-svc-99dhb Jan 6 17:59:18.358: INFO: Got endpoints: latency-svc-99dhb [385.095712ms] Jan 6 17:59:18.361: INFO: Created: latency-svc-6pm6g Jan 6 17:59:18.394: INFO: Got endpoints: latency-svc-6pm6g [420.664726ms] Jan 6 17:59:18.416: INFO: Created: latency-svc-l58fc Jan 6 17:59:18.443: INFO: Got endpoints: latency-svc-l58fc [469.89207ms] Jan 6 17:59:18.502: INFO: Created: latency-svc-v2tcj Jan 6 17:59:18.514: INFO: Got endpoints: latency-svc-v2tcj [540.880192ms] Jan 6 17:59:18.542: INFO: Created: latency-svc-tmpvb Jan 6 17:59:18.557: INFO: Got endpoints: latency-svc-tmpvb [583.254238ms] Jan 6 17:59:18.578: INFO: Created: latency-svc-hr97v Jan 6 17:59:18.602: INFO: Got endpoints: latency-svc-hr97v [628.295393ms] Jan 6 17:59:18.666: INFO: 
Created: latency-svc-5nr4s Jan 6 17:59:18.683: INFO: Got endpoints: latency-svc-5nr4s [709.274284ms] Jan 6 17:59:18.713: INFO: Created: latency-svc-btg8g Jan 6 17:59:18.738: INFO: Got endpoints: latency-svc-btg8g [764.002868ms] Jan 6 17:59:18.802: INFO: Created: latency-svc-bfsk6 Jan 6 17:59:18.809: INFO: Got endpoints: latency-svc-bfsk6 [836.441468ms] Jan 6 17:59:18.839: INFO: Created: latency-svc-p9t67 Jan 6 17:59:18.851: INFO: Got endpoints: latency-svc-p9t67 [878.093778ms] Jan 6 17:59:18.887: INFO: Created: latency-svc-rvskw Jan 6 17:59:18.957: INFO: Got endpoints: latency-svc-rvskw [852.755622ms] Jan 6 17:59:18.974: INFO: Created: latency-svc-9v4mt Jan 6 17:59:18.990: INFO: Got endpoints: latency-svc-9v4mt [870.241215ms] Jan 6 17:59:19.028: INFO: Created: latency-svc-wps2k Jan 6 17:59:19.032: INFO: Got endpoints: latency-svc-wps2k [75.158997ms] Jan 6 17:59:19.055: INFO: Created: latency-svc-9vlnd Jan 6 17:59:19.125: INFO: Got endpoints: latency-svc-9vlnd [940.302891ms] Jan 6 17:59:19.128: INFO: Created: latency-svc-8xpcl Jan 6 17:59:19.147: INFO: Got endpoints: latency-svc-8xpcl [945.727048ms] Jan 6 17:59:19.172: INFO: Created: latency-svc-ndmlf Jan 6 17:59:19.189: INFO: Got endpoints: latency-svc-ndmlf [939.568774ms] Jan 6 17:59:19.214: INFO: Created: latency-svc-tgzhd Jan 6 17:59:19.298: INFO: Got endpoints: latency-svc-tgzhd [939.996277ms] Jan 6 17:59:19.302: INFO: Created: latency-svc-ctl4m Jan 6 17:59:19.322: INFO: Got endpoints: latency-svc-ctl4m [927.734338ms] Jan 6 17:59:19.356: INFO: Created: latency-svc-br9hs Jan 6 17:59:19.388: INFO: Got endpoints: latency-svc-br9hs [944.923788ms] Jan 6 17:59:19.448: INFO: Created: latency-svc-2c546 Jan 6 17:59:19.454: INFO: Got endpoints: latency-svc-2c546 [939.165289ms] Jan 6 17:59:19.478: INFO: Created: latency-svc-wgrvn Jan 6 17:59:19.490: INFO: Got endpoints: latency-svc-wgrvn [932.921437ms] Jan 6 17:59:19.511: INFO: Created: latency-svc-cwwp8 Jan 6 17:59:19.526: INFO: Got endpoints: latency-svc-cwwp8 
[924.872502ms] Jan 6 17:59:19.548: INFO: Created: latency-svc-4455r Jan 6 17:59:19.621: INFO: Got endpoints: latency-svc-4455r [938.525593ms] Jan 6 17:59:19.646: INFO: Created: latency-svc-28f7j Jan 6 17:59:19.678: INFO: Got endpoints: latency-svc-28f7j [940.625272ms] Jan 6 17:59:19.718: INFO: Created: latency-svc-j58jb Jan 6 17:59:19.783: INFO: Got endpoints: latency-svc-j58jb [973.427673ms] Jan 6 17:59:19.808: INFO: Created: latency-svc-bfg4p Jan 6 17:59:19.824: INFO: Got endpoints: latency-svc-bfg4p [972.286734ms] Jan 6 17:59:19.850: INFO: Created: latency-svc-9855g Jan 6 17:59:19.866: INFO: Got endpoints: latency-svc-9855g [876.080219ms] Jan 6 17:59:19.957: INFO: Created: latency-svc-2bz2b Jan 6 17:59:19.960: INFO: Got endpoints: latency-svc-2bz2b [927.491148ms] Jan 6 17:59:20.009: INFO: Created: latency-svc-7ls9k Jan 6 17:59:20.022: INFO: Got endpoints: latency-svc-7ls9k [897.450807ms] Jan 6 17:59:20.054: INFO: Created: latency-svc-jvfrl Jan 6 17:59:20.100: INFO: Got endpoints: latency-svc-jvfrl [953.32692ms] Jan 6 17:59:20.114: INFO: Created: latency-svc-jj2h7 Jan 6 17:59:20.131: INFO: Got endpoints: latency-svc-jj2h7 [941.794865ms] Jan 6 17:59:20.153: INFO: Created: latency-svc-fz6zk Jan 6 17:59:20.184: INFO: Got endpoints: latency-svc-fz6zk [885.070284ms] Jan 6 17:59:20.251: INFO: Created: latency-svc-2ccmn Jan 6 17:59:20.254: INFO: Got endpoints: latency-svc-2ccmn [932.207487ms] Jan 6 17:59:20.281: INFO: Created: latency-svc-j2cwf Jan 6 17:59:20.294: INFO: Got endpoints: latency-svc-j2cwf [905.644142ms] Jan 6 17:59:20.326: INFO: Created: latency-svc-5cnws Jan 6 17:59:20.336: INFO: Got endpoints: latency-svc-5cnws [882.538104ms] Jan 6 17:59:20.394: INFO: Created: latency-svc-gxm8n Jan 6 17:59:20.411: INFO: Got endpoints: latency-svc-gxm8n [920.644945ms] Jan 6 17:59:20.441: INFO: Created: latency-svc-9wl52 Jan 6 17:59:20.450: INFO: Got endpoints: latency-svc-9wl52 [923.72826ms] Jan 6 17:59:20.474: INFO: Created: latency-svc-zgfx8 Jan 6 17:59:20.487: INFO: 
Got endpoints: latency-svc-zgfx8 [865.707786ms] Jan 6 17:59:20.538: INFO: Created: latency-svc-4wkt9 Jan 6 17:59:20.564: INFO: Created: latency-svc-c4j57 Jan 6 17:59:20.564: INFO: Got endpoints: latency-svc-4wkt9 [886.149985ms] Jan 6 17:59:20.578: INFO: Got endpoints: latency-svc-c4j57 [794.61185ms] Jan 6 17:59:20.597: INFO: Created: latency-svc-tjj9r Jan 6 17:59:20.614: INFO: Got endpoints: latency-svc-tjj9r [790.468732ms] Jan 6 17:59:20.682: INFO: Created: latency-svc-2m7qc Jan 6 17:59:20.685: INFO: Got endpoints: latency-svc-2m7qc [818.519418ms] Jan 6 17:59:20.707: INFO: Created: latency-svc-4wjfg Jan 6 17:59:20.722: INFO: Got endpoints: latency-svc-4wjfg [762.419646ms] Jan 6 17:59:20.744: INFO: Created: latency-svc-qw29r Jan 6 17:59:20.774: INFO: Got endpoints: latency-svc-qw29r [751.211163ms] Jan 6 17:59:20.868: INFO: Created: latency-svc-dptg9 Jan 6 17:59:20.870: INFO: Got endpoints: latency-svc-dptg9 [770.085793ms] Jan 6 17:59:20.924: INFO: Created: latency-svc-tr6jk Jan 6 17:59:20.945: INFO: Got endpoints: latency-svc-tr6jk [814.1217ms] Jan 6 17:59:21.017: INFO: Created: latency-svc-kbnkr Jan 6 17:59:21.029: INFO: Got endpoints: latency-svc-kbnkr [845.650867ms] Jan 6 17:59:21.058: INFO: Created: latency-svc-qhwz8 Jan 6 17:59:21.071: INFO: Got endpoints: latency-svc-qhwz8 [817.005902ms] Jan 6 17:59:21.095: INFO: Created: latency-svc-whrsr Jan 6 17:59:21.161: INFO: Got endpoints: latency-svc-whrsr [866.791403ms] Jan 6 17:59:21.163: INFO: Created: latency-svc-2q2nm Jan 6 17:59:21.180: INFO: Got endpoints: latency-svc-2q2nm [843.95507ms] Jan 6 17:59:21.200: INFO: Created: latency-svc-99dcz Jan 6 17:59:21.216: INFO: Got endpoints: latency-svc-99dcz [805.649533ms] Jan 6 17:59:21.239: INFO: Created: latency-svc-sghxw Jan 6 17:59:21.253: INFO: Got endpoints: latency-svc-sghxw [802.232909ms] Jan 6 17:59:21.299: INFO: Created: latency-svc-r78z8 Jan 6 17:59:21.313: INFO: Got endpoints: latency-svc-r78z8 [825.694065ms] Jan 6 17:59:21.344: INFO: Created: 
latency-svc-t24ms Jan 6 17:59:21.355: INFO: Got endpoints: latency-svc-t24ms [790.436806ms] Jan 6 17:59:21.381: INFO: Created: latency-svc-b829l Jan 6 17:59:21.391: INFO: Got endpoints: latency-svc-b829l [813.649503ms] Jan 6 17:59:21.442: INFO: Created: latency-svc-zplts Jan 6 17:59:21.446: INFO: Got endpoints: latency-svc-zplts [831.140566ms] Jan 6 17:59:21.485: INFO: Created: latency-svc-d8hn4 Jan 6 17:59:21.500: INFO: Got endpoints: latency-svc-d8hn4 [815.426748ms] Jan 6 17:59:21.521: INFO: Created: latency-svc-s7lzn Jan 6 17:59:21.530: INFO: Got endpoints: latency-svc-s7lzn [807.942092ms] Jan 6 17:59:21.580: INFO: Created: latency-svc-62qfx Jan 6 17:59:21.583: INFO: Got endpoints: latency-svc-62qfx [808.970215ms] Jan 6 17:59:21.620: INFO: Created: latency-svc-hsfcj Jan 6 17:59:21.650: INFO: Got endpoints: latency-svc-hsfcj [779.233738ms] Jan 6 17:59:21.758: INFO: Created: latency-svc-f9rw9 Jan 6 17:59:21.758: INFO: Got endpoints: latency-svc-f9rw9 [812.495918ms] Jan 6 17:59:21.812: INFO: Created: latency-svc-dl546 Jan 6 17:59:21.825: INFO: Got endpoints: latency-svc-dl546 [796.128172ms] Jan 6 17:59:21.854: INFO: Created: latency-svc-q7r4j Jan 6 17:59:21.904: INFO: Got endpoints: latency-svc-q7r4j [833.11827ms] Jan 6 17:59:21.905: INFO: Created: latency-svc-2cwzh Jan 6 17:59:21.922: INFO: Got endpoints: latency-svc-2cwzh [760.793216ms] Jan 6 17:59:21.953: INFO: Created: latency-svc-gc8wq Jan 6 17:59:21.964: INFO: Got endpoints: latency-svc-gc8wq [783.556182ms] Jan 6 17:59:22.035: INFO: Created: latency-svc-r2tlx Jan 6 17:59:22.043: INFO: Got endpoints: latency-svc-r2tlx [826.024663ms] Jan 6 17:59:22.069: INFO: Created: latency-svc-n622b Jan 6 17:59:22.091: INFO: Got endpoints: latency-svc-n622b [838.269064ms] Jan 6 17:59:22.121: INFO: Created: latency-svc-nj8r8 Jan 6 17:59:22.172: INFO: Got endpoints: latency-svc-nj8r8 [859.256787ms] Jan 6 17:59:22.187: INFO: Created: latency-svc-595dj Jan 6 17:59:22.199: INFO: Got endpoints: latency-svc-595dj [844.036752ms] Jan 
6 17:59:22.219: INFO: Created: latency-svc-dsh7d Jan 6 17:59:22.235: INFO: Got endpoints: latency-svc-dsh7d [843.914336ms] Jan 6 17:59:22.268: INFO: Created: latency-svc-mbwnz Jan 6 17:59:22.328: INFO: Got endpoints: latency-svc-mbwnz [882.439921ms] Jan 6 17:59:22.350: INFO: Created: latency-svc-9rfkc Jan 6 17:59:22.604: INFO: Got endpoints: latency-svc-9rfkc [1.103313463s] Jan 6 17:59:23.114: INFO: Created: latency-svc-58ppj Jan 6 17:59:23.123: INFO: Got endpoints: latency-svc-58ppj [1.593029909s] Jan 6 17:59:23.148: INFO: Created: latency-svc-7998p Jan 6 17:59:23.171: INFO: Got endpoints: latency-svc-7998p [1.58857884s] Jan 6 17:59:23.290: INFO: Created: latency-svc-n59zv Jan 6 17:59:23.303: INFO: Got endpoints: latency-svc-n59zv [1.653503585s] Jan 6 17:59:23.390: INFO: Created: latency-svc-w54kq Jan 6 17:59:23.391: INFO: Got endpoints: latency-svc-w54kq [1.633006848s] Jan 6 17:59:23.424: INFO: Created: latency-svc-5ntfl Jan 6 17:59:23.442: INFO: Got endpoints: latency-svc-5ntfl [1.616235397s] Jan 6 17:59:23.461: INFO: Created: latency-svc-xf5x4 Jan 6 17:59:23.478: INFO: Got endpoints: latency-svc-xf5x4 [1.57358449s] Jan 6 17:59:23.526: INFO: Created: latency-svc-888js Jan 6 17:59:23.529: INFO: Got endpoints: latency-svc-888js [1.606830223s] Jan 6 17:59:23.554: INFO: Created: latency-svc-hxbqn Jan 6 17:59:23.562: INFO: Got endpoints: latency-svc-hxbqn [1.598297247s] Jan 6 17:59:23.584: INFO: Created: latency-svc-nqfb4 Jan 6 17:59:23.593: INFO: Got endpoints: latency-svc-nqfb4 [1.550264679s] Jan 6 17:59:23.736: INFO: Created: latency-svc-2t9fb Jan 6 17:59:23.750: INFO: Got endpoints: latency-svc-2t9fb [1.659528609s] Jan 6 17:59:23.891: INFO: Created: latency-svc-lqvm7 Jan 6 17:59:23.897: INFO: Got endpoints: latency-svc-lqvm7 [1.725184248s] Jan 6 17:59:23.941: INFO: Created: latency-svc-p4ckh Jan 6 17:59:23.959: INFO: Got endpoints: latency-svc-p4ckh [1.760290341s] Jan 6 17:59:23.979: INFO: Created: latency-svc-8w66k Jan 6 17:59:24.036: INFO: Got endpoints: 
latency-svc-8w66k [1.80036029s] Jan 6 17:59:24.060: INFO: Created: latency-svc-4lmvz Jan 6 17:59:24.081: INFO: Got endpoints: latency-svc-4lmvz [1.752496243s] Jan 6 17:59:24.115: INFO: Created: latency-svc-slw7t Jan 6 17:59:24.128: INFO: Got endpoints: latency-svc-slw7t [1.524223115s] Jan 6 17:59:24.178: INFO: Created: latency-svc-8nff5 Jan 6 17:59:24.181: INFO: Got endpoints: latency-svc-8nff5 [1.057851103s] Jan 6 17:59:24.206: INFO: Created: latency-svc-4dptd Jan 6 17:59:24.238: INFO: Got endpoints: latency-svc-4dptd [1.066499669s] Jan 6 17:59:24.539: INFO: Created: latency-svc-77g7x Jan 6 17:59:24.543: INFO: Got endpoints: latency-svc-77g7x [1.239428093s] Jan 6 17:59:24.583: INFO: Created: latency-svc-nqn59 Jan 6 17:59:24.597: INFO: Got endpoints: latency-svc-nqn59 [1.206041016s] Jan 6 17:59:24.616: INFO: Created: latency-svc-schr6 Jan 6 17:59:24.633: INFO: Got endpoints: latency-svc-schr6 [1.191625065s] Jan 6 17:59:24.677: INFO: Created: latency-svc-77nqg Jan 6 17:59:24.681: INFO: Got endpoints: latency-svc-77nqg [1.203082245s] Jan 6 17:59:24.699: INFO: Created: latency-svc-gxkgt Jan 6 17:59:24.724: INFO: Got endpoints: latency-svc-gxkgt [1.195081026s] Jan 6 17:59:24.769: INFO: Created: latency-svc-5b949 Jan 6 17:59:24.831: INFO: Got endpoints: latency-svc-5b949 [1.26883687s] Jan 6 17:59:24.833: INFO: Created: latency-svc-rgkbz Jan 6 17:59:24.850: INFO: Got endpoints: latency-svc-rgkbz [1.257112542s] Jan 6 17:59:24.892: INFO: Created: latency-svc-f6x92 Jan 6 17:59:24.904: INFO: Got endpoints: latency-svc-f6x92 [1.153484126s] Jan 6 17:59:24.928: INFO: Created: latency-svc-sfmbn Jan 6 17:59:24.969: INFO: Got endpoints: latency-svc-sfmbn [1.071108584s] Jan 6 17:59:24.985: INFO: Created: latency-svc-7drt6 Jan 6 17:59:25.001: INFO: Got endpoints: latency-svc-7drt6 [1.041388213s] Jan 6 17:59:25.021: INFO: Created: latency-svc-zmr2g Jan 6 17:59:25.031: INFO: Got endpoints: latency-svc-zmr2g [995.092706ms] Jan 6 17:59:25.054: INFO: Created: latency-svc-bmh8b Jan 6 
17:59:25.069: INFO: Got endpoints: latency-svc-bmh8b [988.325844ms] Jan 6 17:59:25.120: INFO: Created: latency-svc-x7sp4 Jan 6 17:59:25.121: INFO: Got endpoints: latency-svc-x7sp4 [993.38396ms] Jan 6 17:59:25.275: INFO: Created: latency-svc-bxs7k Jan 6 17:59:25.302: INFO: Got endpoints: latency-svc-bxs7k [1.120458689s] Jan 6 17:59:25.329: INFO: Created: latency-svc-ff4qh Jan 6 17:59:25.344: INFO: Got endpoints: latency-svc-ff4qh [1.105982843s] Jan 6 17:59:25.431: INFO: Created: latency-svc-zc296 Jan 6 17:59:25.433: INFO: Got endpoints: latency-svc-zc296 [890.630332ms] Jan 6 17:59:25.453: INFO: Created: latency-svc-vxmvs Jan 6 17:59:25.477: INFO: Got endpoints: latency-svc-vxmvs [880.064858ms] Jan 6 17:59:25.501: INFO: Created: latency-svc-7nc26 Jan 6 17:59:25.514: INFO: Got endpoints: latency-svc-7nc26 [880.75252ms] Jan 6 17:59:25.580: INFO: Created: latency-svc-7phdp Jan 6 17:59:25.582: INFO: Got endpoints: latency-svc-7phdp [901.31968ms] Jan 6 17:59:25.605: INFO: Created: latency-svc-h5n4d Jan 6 17:59:25.622: INFO: Got endpoints: latency-svc-h5n4d [898.528653ms] Jan 6 17:59:25.647: INFO: Created: latency-svc-t8lkg Jan 6 17:59:25.741: INFO: Got endpoints: latency-svc-t8lkg [910.149303ms] Jan 6 17:59:25.742: INFO: Created: latency-svc-jgx25 Jan 6 17:59:25.755: INFO: Got endpoints: latency-svc-jgx25 [904.658113ms] Jan 6 17:59:25.785: INFO: Created: latency-svc-mmkp8 Jan 6 17:59:25.797: INFO: Got endpoints: latency-svc-mmkp8 [893.157302ms] Jan 6 17:59:25.817: INFO: Created: latency-svc-qcftc Jan 6 17:59:25.827: INFO: Got endpoints: latency-svc-qcftc [858.753828ms] Jan 6 17:59:25.885: INFO: Created: latency-svc-d6cct Jan 6 17:59:25.887: INFO: Got endpoints: latency-svc-d6cct [886.58535ms] Jan 6 17:59:25.957: INFO: Created: latency-svc-7dtgr Jan 6 17:59:25.972: INFO: Got endpoints: latency-svc-7dtgr [940.931526ms] Jan 6 17:59:26.031: INFO: Created: latency-svc-d8688 Jan 6 17:59:26.044: INFO: Got endpoints: latency-svc-d8688 [975.364354ms] Jan 6 17:59:26.067: INFO: 
Created: latency-svc-tmxz6 Jan 6 17:59:26.081: INFO: Got endpoints: latency-svc-tmxz6 [959.247164ms] Jan 6 17:59:26.097: INFO: Created: latency-svc-jt5z7 Jan 6 17:59:26.111: INFO: Got endpoints: latency-svc-jt5z7 [808.894394ms] Jan 6 17:59:26.167: INFO: Created: latency-svc-7l85v Jan 6 17:59:26.169: INFO: Got endpoints: latency-svc-7l85v [825.560584ms] Jan 6 17:59:26.203: INFO: Created: latency-svc-xclwp Jan 6 17:59:26.219: INFO: Got endpoints: latency-svc-xclwp [785.768577ms] Jan 6 17:59:26.242: INFO: Created: latency-svc-xkk2r Jan 6 17:59:26.255: INFO: Got endpoints: latency-svc-xkk2r [778.330875ms] Jan 6 17:59:26.310: INFO: Created: latency-svc-bgkj7 Jan 6 17:59:26.313: INFO: Got endpoints: latency-svc-bgkj7 [798.4988ms] Jan 6 17:59:26.353: INFO: Created: latency-svc-6fnhf Jan 6 17:59:26.370: INFO: Got endpoints: latency-svc-6fnhf [787.429422ms] Jan 6 17:59:26.448: INFO: Created: latency-svc-fdlcz Jan 6 17:59:26.450: INFO: Got endpoints: latency-svc-fdlcz [827.878572ms] Jan 6 17:59:26.475: INFO: Created: latency-svc-c2958 Jan 6 17:59:26.499: INFO: Got endpoints: latency-svc-c2958 [757.659611ms] Jan 6 17:59:26.529: INFO: Created: latency-svc-rhqrk Jan 6 17:59:26.539: INFO: Got endpoints: latency-svc-rhqrk [784.10854ms] Jan 6 17:59:26.580: INFO: Created: latency-svc-fcpt6 Jan 6 17:59:26.582: INFO: Got endpoints: latency-svc-fcpt6 [784.569591ms] Jan 6 17:59:26.605: INFO: Created: latency-svc-26pc6 Jan 6 17:59:26.617: INFO: Got endpoints: latency-svc-26pc6 [789.553206ms] Jan 6 17:59:26.644: INFO: Created: latency-svc-2fjfb Jan 6 17:59:26.667: INFO: Got endpoints: latency-svc-2fjfb [779.55922ms] Jan 6 17:59:26.730: INFO: Created: latency-svc-gk8s2 Jan 6 17:59:26.744: INFO: Got endpoints: latency-svc-gk8s2 [771.634099ms] Jan 6 17:59:26.761: INFO: Created: latency-svc-9n7cf Jan 6 17:59:26.773: INFO: Got endpoints: latency-svc-9n7cf [729.013144ms] Jan 6 17:59:26.791: INFO: Created: latency-svc-7glbm Jan 6 17:59:26.804: INFO: Got endpoints: latency-svc-7glbm 
[723.238481ms] Jan 6 17:59:26.827: INFO: Created: latency-svc-dtchv Jan 6 17:59:26.891: INFO: Got endpoints: latency-svc-dtchv [780.175031ms] Jan 6 17:59:26.907: INFO: Created: latency-svc-jcrtp Jan 6 17:59:26.919: INFO: Got endpoints: latency-svc-jcrtp [749.143376ms] Jan 6 17:59:26.947: INFO: Created: latency-svc-z2g4k Jan 6 17:59:26.961: INFO: Got endpoints: latency-svc-z2g4k [741.620279ms] Jan 6 17:59:26.983: INFO: Created: latency-svc-xckv6 Jan 6 17:59:27.035: INFO: Got endpoints: latency-svc-xckv6 [779.144842ms] Jan 6 17:59:27.054: INFO: Created: latency-svc-7l6s7 Jan 6 17:59:27.069: INFO: Got endpoints: latency-svc-7l6s7 [756.644965ms] Jan 6 17:59:27.123: INFO: Created: latency-svc-crk8q Jan 6 17:59:27.166: INFO: Got endpoints: latency-svc-crk8q [796.265465ms] Jan 6 17:59:27.177: INFO: Created: latency-svc-fhsh8 Jan 6 17:59:27.190: INFO: Got endpoints: latency-svc-fhsh8 [739.375963ms] Jan 6 17:59:27.232: INFO: Created: latency-svc-dztg6 Jan 6 17:59:27.244: INFO: Got endpoints: latency-svc-dztg6 [744.67624ms] Jan 6 17:59:27.341: INFO: Created: latency-svc-z4md5 Jan 6 17:59:27.378: INFO: Got endpoints: latency-svc-z4md5 [839.239003ms] Jan 6 17:59:27.415: INFO: Created: latency-svc-sqg5b Jan 6 17:59:27.466: INFO: Got endpoints: latency-svc-sqg5b [883.750766ms] Jan 6 17:59:27.477: INFO: Created: latency-svc-lxgd5 Jan 6 17:59:27.497: INFO: Got endpoints: latency-svc-lxgd5 [879.598936ms] Jan 6 17:59:27.513: INFO: Created: latency-svc-gmft5 Jan 6 17:59:27.527: INFO: Got endpoints: latency-svc-gmft5 [859.665512ms] Jan 6 17:59:27.552: INFO: Created: latency-svc-cmz7w Jan 6 17:59:27.598: INFO: Got endpoints: latency-svc-cmz7w [853.979061ms] Jan 6 17:59:27.631: INFO: Created: latency-svc-tsprc Jan 6 17:59:27.669: INFO: Got endpoints: latency-svc-tsprc [895.409339ms] Jan 6 17:59:27.735: INFO: Created: latency-svc-rsj5f Jan 6 17:59:27.738: INFO: Got endpoints: latency-svc-rsj5f [934.180258ms] Jan 6 17:59:27.787: INFO: Created: latency-svc-8lwtf Jan 6 17:59:27.804: INFO: 
Got endpoints: latency-svc-8lwtf [912.925149ms] Jan 6 17:59:27.835: INFO: Created: latency-svc-74dc4 Jan 6 17:59:27.909: INFO: Got endpoints: latency-svc-74dc4 [990.281093ms] Jan 6 17:59:27.914: INFO: Created: latency-svc-hj7n6 Jan 6 17:59:27.927: INFO: Got endpoints: latency-svc-hj7n6 [966.27632ms] Jan 6 17:59:27.957: INFO: Created: latency-svc-qq9lr Jan 6 17:59:27.966: INFO: Got endpoints: latency-svc-qq9lr [931.660768ms] Jan 6 17:59:27.988: INFO: Created: latency-svc-x95mk Jan 6 17:59:28.055: INFO: Created: latency-svc-cmq7d Jan 6 17:59:28.077: INFO: Got endpoints: latency-svc-x95mk [1.007443984s] Jan 6 17:59:28.078: INFO: Created: latency-svc-qjpwz Jan 6 17:59:28.093: INFO: Got endpoints: latency-svc-qjpwz [903.43083ms] Jan 6 17:59:28.094: INFO: Got endpoints: latency-svc-cmq7d [927.408772ms] Jan 6 17:59:28.197: INFO: Created: latency-svc-qs8gr Jan 6 17:59:28.200: INFO: Got endpoints: latency-svc-qs8gr [956.501797ms] Jan 6 17:59:28.225: INFO: Created: latency-svc-kjwkb Jan 6 17:59:28.238: INFO: Got endpoints: latency-svc-kjwkb [859.2264ms] Jan 6 17:59:28.255: INFO: Created: latency-svc-vw9vc Jan 6 17:59:28.268: INFO: Got endpoints: latency-svc-vw9vc [802.265164ms] Jan 6 17:59:28.341: INFO: Created: latency-svc-qzsrt Jan 6 17:59:28.347: INFO: Got endpoints: latency-svc-qzsrt [850.240364ms] Jan 6 17:59:28.371: INFO: Created: latency-svc-zzszm Jan 6 17:59:28.388: INFO: Got endpoints: latency-svc-zzszm [861.364083ms] Jan 6 17:59:28.410: INFO: Created: latency-svc-cfwsk Jan 6 17:59:28.425: INFO: Got endpoints: latency-svc-cfwsk [827.537315ms] Jan 6 17:59:28.490: INFO: Created: latency-svc-vg4l8 Jan 6 17:59:28.497: INFO: Got endpoints: latency-svc-vg4l8 [827.675482ms] Jan 6 17:59:28.533: INFO: Created: latency-svc-dvbcd Jan 6 17:59:28.545: INFO: Got endpoints: latency-svc-dvbcd [807.108605ms] Jan 6 17:59:28.582: INFO: Created: latency-svc-tdx4k Jan 6 17:59:28.663: INFO: Got endpoints: latency-svc-tdx4k [859.294373ms] Jan 6 17:59:28.666: INFO: Created: 
latency-svc-bmkl9 Jan 6 17:59:28.671: INFO: Got endpoints: latency-svc-bmkl9 [762.190175ms] Jan 6 17:59:28.693: INFO: Created: latency-svc-nwcr8 Jan 6 17:59:28.708: INFO: Got endpoints: latency-svc-nwcr8 [780.294892ms] Jan 6 17:59:28.741: INFO: Created: latency-svc-ltgx6 Jan 6 17:59:28.790: INFO: Got endpoints: latency-svc-ltgx6 [823.643901ms] Jan 6 17:59:28.803: INFO: Created: latency-svc-fph29 Jan 6 17:59:28.816: INFO: Got endpoints: latency-svc-fph29 [739.047376ms] Jan 6 17:59:28.839: INFO: Created: latency-svc-hhdlh Jan 6 17:59:28.852: INFO: Got endpoints: latency-svc-hhdlh [758.732886ms] Jan 6 17:59:28.878: INFO: Created: latency-svc-vjp6w Jan 6 17:59:28.951: INFO: Got endpoints: latency-svc-vjp6w [857.498548ms] Jan 6 17:59:28.962: INFO: Created: latency-svc-cxbws Jan 6 17:59:28.974: INFO: Got endpoints: latency-svc-cxbws [773.050866ms] Jan 6 17:59:28.995: INFO: Created: latency-svc-4tcph Jan 6 17:59:29.011: INFO: Got endpoints: latency-svc-4tcph [773.043674ms] Jan 6 17:59:29.031: INFO: Created: latency-svc-697xf Jan 6 17:59:29.046: INFO: Got endpoints: latency-svc-697xf [778.025676ms] Jan 6 17:59:29.095: INFO: Created: latency-svc-bz52h Jan 6 17:59:29.106: INFO: Got endpoints: latency-svc-bz52h [759.045883ms] Jan 6 17:59:29.137: INFO: Created: latency-svc-5nfnh Jan 6 17:59:29.157: INFO: Got endpoints: latency-svc-5nfnh [768.312076ms] Jan 6 17:59:29.185: INFO: Created: latency-svc-j6f9x Jan 6 17:59:29.226: INFO: Got endpoints: latency-svc-j6f9x [800.975321ms] Jan 6 17:59:29.253: INFO: Created: latency-svc-8pwhh Jan 6 17:59:29.269: INFO: Got endpoints: latency-svc-8pwhh [772.570014ms] Jan 6 17:59:29.289: INFO: Created: latency-svc-tgtpg Jan 6 17:59:29.306: INFO: Got endpoints: latency-svc-tgtpg [760.219528ms] Jan 6 17:59:29.371: INFO: Created: latency-svc-ctl6s Jan 6 17:59:29.374: INFO: Got endpoints: latency-svc-ctl6s [710.443788ms] Jan 6 17:59:29.422: INFO: Created: latency-svc-4dplq Jan 6 17:59:29.438: INFO: Got endpoints: latency-svc-4dplq [767.015335ms] 
Jan 6 17:59:29.517: INFO: Created: latency-svc-mrrzp Jan 6 17:59:29.542: INFO: Got endpoints: latency-svc-mrrzp [834.011576ms] Jan 6 17:59:29.574: INFO: Created: latency-svc-tqfr2 Jan 6 17:59:29.594: INFO: Got endpoints: latency-svc-tqfr2 [804.429666ms] Jan 6 17:59:29.658: INFO: Created: latency-svc-jf4c8 Jan 6 17:59:29.667: INFO: Got endpoints: latency-svc-jf4c8 [850.646747ms] Jan 6 17:59:29.685: INFO: Created: latency-svc-kvr8b Jan 6 17:59:29.697: INFO: Got endpoints: latency-svc-kvr8b [844.40152ms] Jan 6 17:59:29.721: INFO: Created: latency-svc-xxwsm Jan 6 17:59:29.733: INFO: Got endpoints: latency-svc-xxwsm [782.192077ms] Jan 6 17:59:29.789: INFO: Created: latency-svc-8w8m4 Jan 6 17:59:29.794: INFO: Got endpoints: latency-svc-8w8m4 [820.343575ms] Jan 6 17:59:29.848: INFO: Created: latency-svc-npk9s Jan 6 17:59:29.860: INFO: Got endpoints: latency-svc-npk9s [849.077053ms] Jan 6 17:59:29.883: INFO: Created: latency-svc-jmk6g Jan 6 17:59:29.963: INFO: Got endpoints: latency-svc-jmk6g [916.928604ms] Jan 6 17:59:29.965: INFO: Created: latency-svc-27qx7 Jan 6 17:59:29.974: INFO: Got endpoints: latency-svc-27qx7 [867.867418ms] Jan 6 17:59:29.995: INFO: Created: latency-svc-jpqvl Jan 6 17:59:30.005: INFO: Got endpoints: latency-svc-jpqvl [848.509863ms] Jan 6 17:59:30.036: INFO: Created: latency-svc-vqpbg Jan 6 17:59:30.047: INFO: Got endpoints: latency-svc-vqpbg [820.560483ms] Jan 6 17:59:30.131: INFO: Created: latency-svc-6cnbr Jan 6 17:59:30.153: INFO: Created: latency-svc-gjfdw Jan 6 17:59:30.153: INFO: Got endpoints: latency-svc-6cnbr [884.121919ms] Jan 6 17:59:30.186: INFO: Got endpoints: latency-svc-gjfdw [880.748276ms] Jan 6 17:59:30.217: INFO: Created: latency-svc-c8zm8 Jan 6 17:59:30.227: INFO: Got endpoints: latency-svc-c8zm8 [853.560856ms] Jan 6 17:59:30.273: INFO: Created: latency-svc-26rbf Jan 6 17:59:30.288: INFO: Got endpoints: latency-svc-26rbf [849.437627ms] Jan 6 17:59:30.322: INFO: Created: latency-svc-xpwtf Jan 6 17:59:30.336: INFO: Got endpoints: 
latency-svc-xpwtf [794.410296ms]
Jan 6 17:59:30.336: INFO: Latencies: [75.158997ms 130.936032ms 146.242586ms 211.24389ms 228.000751ms 276.436169ms 385.095712ms 420.664726ms 469.89207ms 540.880192ms 583.254238ms 628.295393ms 709.274284ms 710.443788ms 723.238481ms 729.013144ms 739.047376ms 739.375963ms 741.620279ms 744.67624ms 749.143376ms 751.211163ms 756.644965ms 757.659611ms 758.732886ms 759.045883ms 760.219528ms 760.793216ms 762.190175ms 762.419646ms 764.002868ms 767.015335ms 768.312076ms 770.085793ms 771.634099ms 772.570014ms 773.043674ms 773.050866ms 778.025676ms 778.330875ms 779.144842ms 779.233738ms 779.55922ms 780.175031ms 780.294892ms 782.192077ms 783.556182ms 784.10854ms 784.569591ms 785.768577ms 787.429422ms 789.553206ms 790.436806ms 790.468732ms 794.410296ms 794.61185ms 796.128172ms 796.265465ms 798.4988ms 800.975321ms 802.232909ms 802.265164ms 804.429666ms 805.649533ms 807.108605ms 807.942092ms 808.894394ms 808.970215ms 812.495918ms 813.649503ms 814.1217ms 815.426748ms 817.005902ms 818.519418ms 820.343575ms 820.560483ms 823.643901ms 825.560584ms 825.694065ms 826.024663ms 827.537315ms 827.675482ms 827.878572ms 831.140566ms 833.11827ms 834.011576ms 836.441468ms 838.269064ms 839.239003ms 843.914336ms 843.95507ms 844.036752ms 844.40152ms 845.650867ms 848.509863ms 849.077053ms 849.437627ms 850.240364ms 850.646747ms 852.755622ms 853.560856ms 853.979061ms 857.498548ms 858.753828ms 859.2264ms 859.256787ms 859.294373ms 859.665512ms 861.364083ms 865.707786ms 866.791403ms 867.867418ms 870.241215ms 876.080219ms 878.093778ms 879.598936ms 880.064858ms 880.748276ms 880.75252ms 882.439921ms 882.538104ms 883.750766ms 884.121919ms 885.070284ms 886.149985ms 886.58535ms 890.630332ms 893.157302ms 895.409339ms 897.450807ms 898.528653ms 901.31968ms 903.43083ms 904.658113ms 905.644142ms 910.149303ms 912.925149ms 916.928604ms 920.644945ms 923.72826ms 924.872502ms 927.408772ms 927.491148ms 927.734338ms 931.660768ms 932.207487ms 932.921437ms 934.180258ms 938.525593ms 939.165289ms 939.568774ms 939.996277ms 940.302891ms 940.625272ms 940.931526ms 941.794865ms 944.923788ms 945.727048ms 953.32692ms 956.501797ms 959.247164ms 966.27632ms 972.286734ms 973.427673ms 975.364354ms 988.325844ms 990.281093ms 993.38396ms 995.092706ms 1.007443984s 1.041388213s 1.057851103s 1.066499669s 1.071108584s 1.103313463s 1.105982843s 1.120458689s 1.153484126s 1.191625065s 1.195081026s 1.203082245s 1.206041016s 1.239428093s 1.257112542s 1.26883687s 1.524223115s 1.550264679s 1.57358449s 1.58857884s 1.593029909s 1.598297247s 1.606830223s 1.616235397s 1.633006848s 1.653503585s 1.659528609s 1.725184248s 1.752496243s 1.760290341s 1.80036029s]
Jan 6 17:59:30.337: INFO: 50 %ile: 853.560856ms
Jan 6 17:59:30.337: INFO: 90 %ile: 1.203082245s
Jan 6 17:59:30.337: INFO: 99 %ile: 1.760290341s
Jan 6 17:59:30.337: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 6 17:59:30.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-4chpl" for this suite.
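The `50 %ile` / `90 %ile` / `99 %ile` summary above is computed over the 200 sorted latency samples in the `Latencies:` array. The figures in this log are consistent with a simple nearest-rank-style index into the sorted list; the sketch below assumes that scheme for illustration and is not copied from the test source.

```python
def percentile(sorted_samples, pct):
    """Nearest-rank-style percentile over an ascending list (assumed scheme,
    matching the 50/90/99 %ile figures reported in this log)."""
    if not sorted_samples:
        raise ValueError("no samples")
    # Index int(n * pct / 100), clamped so pct=100 stays in range.
    idx = min(int(len(sorted_samples) * pct / 100), len(sorted_samples) - 1)
    return sorted_samples[idx]

def summarize(latencies_ms):
    """Print a summary shaped like the log's percentile lines."""
    samples = sorted(latencies_ms)
    for pct in (50, 90, 99):
        print(f"{pct} %ile: {percentile(samples, pct)}ms")
    print(f"Total sample count: {len(samples)}")
```

With the 200 samples above, this indexing picks element 100 for the median (853.560856ms), element 180 for the 90th percentile (1.203082245s), and element 198 for the 99th (1.760290341s), matching the reported values.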
Jan  6 17:59:54.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 17:59:54.366: INFO: namespace: e2e-tests-svc-latency-4chpl, resource: bindings, ignored listing per whitelist
Jan  6 17:59:54.441: INFO: namespace e2e-tests-svc-latency-4chpl deletion completed in 24.097977344s

• [SLOW TEST:40.837 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 17:59:54.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-fb1575c7-5048-11eb-8655-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  6 17:59:54.550: INFO: Waiting up to 5m0s for pod "pod-secrets-fb1761bc-5048-11eb-8655-0242ac110009" in namespace "e2e-tests-secrets-2pmgv" to be "success or failure"
Jan  6 17:59:54.605: INFO: Pod "pod-secrets-fb1761bc-5048-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 55.049294ms
Jan  6 17:59:56.609: INFO: Pod "pod-secrets-fb1761bc-5048-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059453575s
Jan  6 17:59:58.614: INFO: Pod "pod-secrets-fb1761bc-5048-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064018186s
STEP: Saw pod success
Jan  6 17:59:58.614: INFO: Pod "pod-secrets-fb1761bc-5048-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 17:59:58.617: INFO: Trying to get logs from node hunter-worker pod pod-secrets-fb1761bc-5048-11eb-8655-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan  6 17:59:58.641: INFO: Waiting for pod pod-secrets-fb1761bc-5048-11eb-8655-0242ac110009 to disappear
Jan  6 17:59:58.644: INFO: Pod pod-secrets-fb1761bc-5048-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 17:59:58.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2pmgv" for this suite.
Jan  6 18:00:04.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:00:04.704: INFO: namespace: e2e-tests-secrets-2pmgv, resource: bindings, ignored listing per whitelist
Jan  6 18:00:04.785: INFO: namespace e2e-tests-secrets-2pmgv deletion completed in 6.138083028s

• [SLOW TEST:10.344 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:00:04.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  6 18:00:04.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-xv295'
Jan  6 18:00:07.337: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  6 18:00:07.337: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan  6 18:00:11.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-xv295'
Jan  6 18:00:11.504: INFO: stderr: ""
Jan  6 18:00:11.504: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:00:11.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xv295" for this suite.
Jan  6 18:00:29.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:00:29.602: INFO: namespace: e2e-tests-kubectl-xv295, resource: bindings, ignored listing per whitelist
Jan  6 18:00:29.614: INFO: namespace e2e-tests-kubectl-xv295 deletion completed in 18.101137285s

• [SLOW TEST:24.829 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:00:29.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  6 18:00:29.735: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  6 18:00:36.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13d06a41-5049-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-wjsjg" to be "success or failure"
Jan  6 18:00:36.040: INFO: Pod "downwardapi-volume-13d06a41-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.808548ms
Jan  6 18:00:38.045: INFO: Pod "downwardapi-volume-13d06a41-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021318547s
Jan  6 18:00:40.049: INFO: Pod "downwardapi-volume-13d06a41-5049-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025637644s
STEP: Saw pod success
Jan  6 18:00:40.049: INFO: Pod "downwardapi-volume-13d06a41-5049-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:00:40.053: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-13d06a41-5049-11eb-8655-0242ac110009 container client-container: 
STEP: delete the pod
Jan  6 18:00:40.076: INFO: Waiting for pod downwardapi-volume-13d06a41-5049-11eb-8655-0242ac110009 to disappear
Jan  6 18:00:40.081: INFO: Pod downwardapi-volume-13d06a41-5049-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:00:40.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wjsjg" for this suite.
Jan  6 18:00:46.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:00:46.175: INFO: namespace: e2e-tests-projected-wjsjg, resource: bindings, ignored listing per whitelist
Jan  6 18:00:46.204: INFO: namespace e2e-tests-projected-wjsjg deletion completed in 6.119261718s

• [SLOW TEST:10.259 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:00:46.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-19f0bfcc-5049-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  6 18:00:46.317: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-19f29071-5049-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-m6kdv" to be "success or failure"
Jan  6 18:00:46.321: INFO: Pod "pod-projected-configmaps-19f29071-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.989127ms
Jan  6 18:00:48.325: INFO: Pod "pod-projected-configmaps-19f29071-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008085195s
Jan  6 18:00:50.329: INFO: Pod "pod-projected-configmaps-19f29071-5049-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012646514s
STEP: Saw pod success
Jan  6 18:00:50.329: INFO: Pod "pod-projected-configmaps-19f29071-5049-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:00:50.332: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-19f29071-5049-11eb-8655-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  6 18:00:50.414: INFO: Waiting for pod pod-projected-configmaps-19f29071-5049-11eb-8655-0242ac110009 to disappear
Jan  6 18:00:50.443: INFO: Pod pod-projected-configmaps-19f29071-5049-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:00:50.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m6kdv" for this suite.
Jan  6 18:00:56.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:00:56.549: INFO: namespace: e2e-tests-projected-m6kdv, resource: bindings, ignored listing per whitelist
Jan  6 18:00:56.564: INFO: namespace e2e-tests-projected-m6kdv deletion completed in 6.116787805s

• [SLOW TEST:10.359 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:00:56.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  6 18:00:56.690: INFO: Waiting up to 5m0s for pod "pod-201eeb65-5049-11eb-8655-0242ac110009" in namespace "e2e-tests-emptydir-sxqq8" to be "success or failure"
Jan  6 18:00:56.959: INFO: Pod "pod-201eeb65-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 268.941585ms
Jan  6 18:00:58.963: INFO: Pod "pod-201eeb65-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.273120842s
Jan  6 18:01:00.968: INFO: Pod "pod-201eeb65-5049-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.277645241s
STEP: Saw pod success
Jan  6 18:01:00.968: INFO: Pod "pod-201eeb65-5049-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:01:00.971: INFO: Trying to get logs from node hunter-worker pod pod-201eeb65-5049-11eb-8655-0242ac110009 container test-container: 
STEP: delete the pod
Jan  6 18:01:01.018: INFO: Waiting for pod pod-201eeb65-5049-11eb-8655-0242ac110009 to disappear
Jan  6 18:01:01.021: INFO: Pod pod-201eeb65-5049-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:01:01.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sxqq8" for this suite.
Jan  6 18:01:07.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:01:07.116: INFO: namespace: e2e-tests-emptydir-sxqq8, resource: bindings, ignored listing per whitelist
Jan  6 18:01:07.179: INFO: namespace e2e-tests-emptydir-sxqq8 deletion completed in 6.152808158s

• [SLOW TEST:10.615 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:01:07.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan  6 18:01:07.286: INFO: namespace e2e-tests-kubectl-mbdch
Jan  6 18:01:07.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mbdch'
Jan  6 18:01:07.544: INFO: stderr: ""
Jan  6 18:01:07.544: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  6 18:01:08.623: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 18:01:08.623: INFO: Found 0 / 1
Jan  6 18:01:09.548: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 18:01:09.548: INFO: Found 0 / 1
Jan  6 18:01:10.557: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 18:01:10.557: INFO: Found 0 / 1
Jan  6 18:01:11.549: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 18:01:11.549: INFO: Found 1 / 1
Jan  6 18:01:11.549: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  6 18:01:11.552: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 18:01:11.552: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  6 18:01:11.552: INFO: wait on redis-master startup in e2e-tests-kubectl-mbdch 
Jan  6 18:01:11.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-k7fcw redis-master --namespace=e2e-tests-kubectl-mbdch'
Jan  6 18:01:11.670: INFO: stderr: ""
Jan  6 18:01:11.670: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 06 Jan 18:01:10.502 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Jan 18:01:10.502 # Server started, Redis version 3.2.12\n1:M 06 Jan 18:01:10.502 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Jan 18:01:10.503 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan  6 18:01:11.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-mbdch'
Jan  6 18:01:11.837: INFO: stderr: ""
Jan  6 18:01:11.837: INFO: stdout: "service/rm2 exposed\n"
Jan  6 18:01:11.861: INFO: Service rm2 in namespace e2e-tests-kubectl-mbdch found.
STEP: exposing service
Jan  6 18:01:13.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-mbdch'
Jan  6 18:01:14.054: INFO: stderr: ""
Jan  6 18:01:14.054: INFO: stdout: "service/rm3 exposed\n"
Jan  6 18:01:14.059: INFO: Service rm3 in namespace e2e-tests-kubectl-mbdch found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:01:16.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mbdch" for this suite.
Jan  6 18:01:40.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:01:40.131: INFO: namespace: e2e-tests-kubectl-mbdch, resource: bindings, ignored listing per whitelist
Jan  6 18:01:40.181: INFO: namespace e2e-tests-kubectl-mbdch deletion completed in 24.110380542s

• [SLOW TEST:33.002 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:01:40.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  6 18:01:40.319: INFO: Waiting up to 5m0s for pod "pod-3a230331-5049-11eb-8655-0242ac110009" in namespace "e2e-tests-emptydir-665wf" to be "success or failure"
Jan  6 18:01:40.334: INFO: Pod "pod-3a230331-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 14.272443ms
Jan  6 18:01:42.338: INFO: Pod "pod-3a230331-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018790911s
Jan  6 18:01:44.342: INFO: Pod "pod-3a230331-5049-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023032314s
STEP: Saw pod success
Jan  6 18:01:44.343: INFO: Pod "pod-3a230331-5049-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:01:44.345: INFO: Trying to get logs from node hunter-worker2 pod pod-3a230331-5049-11eb-8655-0242ac110009 container test-container: 
STEP: delete the pod
Jan  6 18:01:44.392: INFO: Waiting for pod pod-3a230331-5049-11eb-8655-0242ac110009 to disappear
Jan  6 18:01:44.414: INFO: Pod pod-3a230331-5049-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:01:44.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-665wf" for this suite.
Jan  6 18:01:50.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:01:50.553: INFO: namespace: e2e-tests-emptydir-665wf, resource: bindings, ignored listing per whitelist
Jan  6 18:01:50.574: INFO: namespace e2e-tests-emptydir-665wf deletion completed in 6.157327447s

• [SLOW TEST:10.392 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:01:50.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan  6 18:01:50.770: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-qsbgt,SelfLink:/api/v1/namespaces/e2e-tests-watch-qsbgt/configmaps/e2e-watch-test-resource-version,UID:4053a777-5049-11eb-8302-0242ac120002,ResourceVersion:18058448,Generation:0,CreationTimestamp:2021-01-06 18:01:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  6 18:01:50.770: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-qsbgt,SelfLink:/api/v1/namespaces/e2e-tests-watch-qsbgt/configmaps/e2e-watch-test-resource-version,UID:4053a777-5049-11eb-8302-0242ac120002,ResourceVersion:18058449,Generation:0,CreationTimestamp:2021-01-06 18:01:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:01:50.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-qsbgt" for this suite.
Jan  6 18:01:56.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:01:56.901: INFO: namespace: e2e-tests-watch-qsbgt, resource: bindings, ignored listing per whitelist
Jan  6 18:01:56.935: INFO: namespace e2e-tests-watch-qsbgt deletion completed in 6.148500036s

• [SLOW TEST:6.360 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
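The watch test above starts watching at the resourceVersion returned by the first update, so it observes only the later events: the second MODIFIED and the DELETED. A toy Go model of that filtering semantic (not client-go's actual implementation; the types and numeric resourceVersions are illustrative):

```go
package main

import "fmt"

// event is a toy stand-in for a watch.Event: just the event type and
// the object's resourceVersion.
type event struct {
	kind string // ADDED, MODIFIED, DELETED
	rv   int
}

// watchFrom models the semantics the test exercises: a watch started
// at resourceVersion rv delivers only events newer than rv.
func watchFrom(history []event, rv int) []event {
	var out []event
	for _, e := range history {
		if e.rv > rv {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	history := []event{
		{"ADDED", 100},    // configmap created
		{"MODIFIED", 101}, // first update; the watch starts here
		{"MODIFIED", 102}, // second update
		{"DELETED", 103},
	}
	for _, e := range watchFrom(history, 101) {
		fmt.Println(e.kind, e.rv)
	}
}
```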
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:01:56.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan  6 18:01:57.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8vmb9'
Jan  6 18:01:57.311: INFO: stderr: ""
Jan  6 18:01:57.311: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  6 18:01:57.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8vmb9'
Jan  6 18:01:57.417: INFO: stderr: ""
Jan  6 18:01:57.417: INFO: stdout: "update-demo-nautilus-6fjrv update-demo-nautilus-wlf94 "
Jan  6 18:01:57.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6fjrv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vmb9'
Jan  6 18:01:57.514: INFO: stderr: ""
Jan  6 18:01:57.514: INFO: stdout: ""
Jan  6 18:01:57.514: INFO: update-demo-nautilus-6fjrv is created but not running
Jan  6 18:02:02.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8vmb9'
Jan  6 18:02:02.622: INFO: stderr: ""
Jan  6 18:02:02.622: INFO: stdout: "update-demo-nautilus-6fjrv update-demo-nautilus-wlf94 "
Jan  6 18:02:02.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6fjrv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vmb9'
Jan  6 18:02:02.730: INFO: stderr: ""
Jan  6 18:02:02.730: INFO: stdout: "true"
Jan  6 18:02:02.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6fjrv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vmb9'
Jan  6 18:02:02.840: INFO: stderr: ""
Jan  6 18:02:02.840: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 18:02:02.840: INFO: validating pod update-demo-nautilus-6fjrv
Jan  6 18:02:02.847: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  6 18:02:02.847: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  6 18:02:02.847: INFO: update-demo-nautilus-6fjrv is verified up and running
Jan  6 18:02:02.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wlf94 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vmb9'
Jan  6 18:02:02.972: INFO: stderr: ""
Jan  6 18:02:02.972: INFO: stdout: "true"
Jan  6 18:02:02.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wlf94 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vmb9'
Jan  6 18:02:03.059: INFO: stderr: ""
Jan  6 18:02:03.059: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 18:02:03.059: INFO: validating pod update-demo-nautilus-wlf94
Jan  6 18:02:03.062: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  6 18:02:03.062: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  6 18:02:03.062: INFO: update-demo-nautilus-wlf94 is verified up and running
STEP: rolling-update to new replication controller
Jan  6 18:02:03.064: INFO: scanned /root for discovery docs: 
Jan  6 18:02:03.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-8vmb9'
Jan  6 18:02:25.805: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  6 18:02:25.805: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  6 18:02:25.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8vmb9'
Jan  6 18:02:25.911: INFO: stderr: ""
Jan  6 18:02:25.911: INFO: stdout: "update-demo-kitten-88qmk update-demo-kitten-8m7j8 "
Jan  6 18:02:25.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-88qmk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vmb9'
Jan  6 18:02:26.019: INFO: stderr: ""
Jan  6 18:02:26.019: INFO: stdout: "true"
Jan  6 18:02:26.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-88qmk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vmb9'
Jan  6 18:02:26.129: INFO: stderr: ""
Jan  6 18:02:26.129: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  6 18:02:26.129: INFO: validating pod update-demo-kitten-88qmk
Jan  6 18:02:26.133: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  6 18:02:26.133: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  6 18:02:26.133: INFO: update-demo-kitten-88qmk is verified up and running
Jan  6 18:02:26.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8m7j8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vmb9'
Jan  6 18:02:26.225: INFO: stderr: ""
Jan  6 18:02:26.225: INFO: stdout: "true"
Jan  6 18:02:26.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8m7j8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vmb9'
Jan  6 18:02:26.320: INFO: stderr: ""
Jan  6 18:02:26.320: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  6 18:02:26.320: INFO: validating pod update-demo-kitten-8m7j8
Jan  6 18:02:26.323: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  6 18:02:26.323: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  6 18:02:26.323: INFO: update-demo-kitten-8m7j8 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:02:26.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8vmb9" for this suite.
Jan  6 18:02:48.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:02:48.417: INFO: namespace: e2e-tests-kubectl-8vmb9, resource: bindings, ignored listing per whitelist
Jan  6 18:02:48.426: INFO: namespace e2e-tests-kubectl-8vmb9 deletion completed in 22.100315638s

• [SLOW TEST:51.491 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:02:48.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan  6 18:02:49.070: INFO: created pod pod-service-account-defaultsa
Jan  6 18:02:49.070: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  6 18:02:49.073: INFO: created pod pod-service-account-mountsa
Jan  6 18:02:49.073: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  6 18:02:49.095: INFO: created pod pod-service-account-nomountsa
Jan  6 18:02:49.095: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  6 18:02:49.110: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  6 18:02:49.110: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  6 18:02:49.146: INFO: created pod pod-service-account-mountsa-mountspec
Jan  6 18:02:49.146: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  6 18:02:49.194: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  6 18:02:49.194: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  6 18:02:49.247: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  6 18:02:49.247: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  6 18:02:49.289: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  6 18:02:49.289: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  6 18:02:49.326: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  6 18:02:49.326: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:02:49.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-hvhm7" for this suite.
Jan  6 18:03:19.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:03:19.462: INFO: namespace: e2e-tests-svcaccounts-hvhm7, resource: bindings, ignored listing per whitelist
Jan  6 18:03:19.498: INFO: namespace e2e-tests-svcaccounts-hvhm7 deletion completed in 30.104777492s

• [SLOW TEST:31.072 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:03:19.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  6 18:03:19.601: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7jskd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7jskd/configmaps/e2e-watch-test-configmap-a,UID:7550500b-5049-11eb-8302-0242ac120002,ResourceVersion:18058854,Generation:0,CreationTimestamp:2021-01-06 18:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  6 18:03:19.602: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7jskd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7jskd/configmaps/e2e-watch-test-configmap-a,UID:7550500b-5049-11eb-8302-0242ac120002,ResourceVersion:18058854,Generation:0,CreationTimestamp:2021-01-06 18:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  6 18:03:29.609: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7jskd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7jskd/configmaps/e2e-watch-test-configmap-a,UID:7550500b-5049-11eb-8302-0242ac120002,ResourceVersion:18058874,Generation:0,CreationTimestamp:2021-01-06 18:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  6 18:03:29.609: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7jskd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7jskd/configmaps/e2e-watch-test-configmap-a,UID:7550500b-5049-11eb-8302-0242ac120002,ResourceVersion:18058874,Generation:0,CreationTimestamp:2021-01-06 18:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  6 18:03:39.617: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7jskd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7jskd/configmaps/e2e-watch-test-configmap-a,UID:7550500b-5049-11eb-8302-0242ac120002,ResourceVersion:18058894,Generation:0,CreationTimestamp:2021-01-06 18:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  6 18:03:39.618: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7jskd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7jskd/configmaps/e2e-watch-test-configmap-a,UID:7550500b-5049-11eb-8302-0242ac120002,ResourceVersion:18058894,Generation:0,CreationTimestamp:2021-01-06 18:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  6 18:03:49.625: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7jskd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7jskd/configmaps/e2e-watch-test-configmap-a,UID:7550500b-5049-11eb-8302-0242ac120002,ResourceVersion:18058914,Generation:0,CreationTimestamp:2021-01-06 18:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  6 18:03:49.625: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7jskd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7jskd/configmaps/e2e-watch-test-configmap-a,UID:7550500b-5049-11eb-8302-0242ac120002,ResourceVersion:18058914,Generation:0,CreationTimestamp:2021-01-06 18:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  6 18:03:59.633: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7jskd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7jskd/configmaps/e2e-watch-test-configmap-b,UID:8d2c8a32-5049-11eb-8302-0242ac120002,ResourceVersion:18058934,Generation:0,CreationTimestamp:2021-01-06 18:03:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  6 18:03:59.633: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7jskd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7jskd/configmaps/e2e-watch-test-configmap-b,UID:8d2c8a32-5049-11eb-8302-0242ac120002,ResourceVersion:18058934,Generation:0,CreationTimestamp:2021-01-06 18:03:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  6 18:04:09.639: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7jskd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7jskd/configmaps/e2e-watch-test-configmap-b,UID:8d2c8a32-5049-11eb-8302-0242ac120002,ResourceVersion:18058954,Generation:0,CreationTimestamp:2021-01-06 18:03:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  6 18:04:09.640: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7jskd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7jskd/configmaps/e2e-watch-test-configmap-b,UID:8d2c8a32-5049-11eb-8302-0242ac120002,ResourceVersion:18058954,Generation:0,CreationTimestamp:2021-01-06 18:03:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:04:19.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-7jskd" for this suite.
Jan  6 18:04:25.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:04:25.702: INFO: namespace: e2e-tests-watch-7jskd, resource: bindings, ignored listing per whitelist
Jan  6 18:04:25.749: INFO: namespace e2e-tests-watch-7jskd deletion completed in 6.10403399s

• [SLOW TEST:66.251 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:04:25.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-9cccbf10-5049-11eb-8655-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  6 18:04:25.947: INFO: Waiting up to 5m0s for pod "pod-secrets-9cdb56a8-5049-11eb-8655-0242ac110009" in namespace "e2e-tests-secrets-q5kbz" to be "success or failure"
Jan  6 18:04:25.950: INFO: Pod "pod-secrets-9cdb56a8-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.91688ms
Jan  6 18:04:27.954: INFO: Pod "pod-secrets-9cdb56a8-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007398987s
Jan  6 18:04:29.958: INFO: Pod "pod-secrets-9cdb56a8-5049-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01139195s
STEP: Saw pod success
Jan  6 18:04:29.958: INFO: Pod "pod-secrets-9cdb56a8-5049-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:04:29.961: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-9cdb56a8-5049-11eb-8655-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan  6 18:04:29.982: INFO: Waiting for pod pod-secrets-9cdb56a8-5049-11eb-8655-0242ac110009 to disappear
Jan  6 18:04:29.991: INFO: Pod pod-secrets-9cdb56a8-5049-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:04:29.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-q5kbz" for this suite.
Jan  6 18:04:36.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:04:36.052: INFO: namespace: e2e-tests-secrets-q5kbz, resource: bindings, ignored listing per whitelist
Jan  6 18:04:36.094: INFO: namespace e2e-tests-secrets-q5kbz deletion completed in 6.09945191s
STEP: Destroying namespace "e2e-tests-secret-namespace-4x24z" for this suite.
Jan  6 18:04:42.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:04:42.151: INFO: namespace: e2e-tests-secret-namespace-4x24z, resource: bindings, ignored listing per whitelist
Jan  6 18:04:42.207: INFO: namespace e2e-tests-secret-namespace-4x24z deletion completed in 6.112536526s

• [SLOW TEST:16.458 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:04:42.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-a69be6d1-5049-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  6 18:04:42.337: INFO: Waiting up to 5m0s for pod "pod-configmaps-a69e6cac-5049-11eb-8655-0242ac110009" in namespace "e2e-tests-configmap-86dqg" to be "success or failure"
Jan  6 18:04:42.346: INFO: Pod "pod-configmaps-a69e6cac-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 9.011946ms
Jan  6 18:04:44.352: INFO: Pod "pod-configmaps-a69e6cac-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014711332s
Jan  6 18:04:46.358: INFO: Pod "pod-configmaps-a69e6cac-5049-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02128284s
STEP: Saw pod success
Jan  6 18:04:46.358: INFO: Pod "pod-configmaps-a69e6cac-5049-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:04:46.380: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-a69e6cac-5049-11eb-8655-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jan  6 18:04:46.444: INFO: Waiting for pod pod-configmaps-a69e6cac-5049-11eb-8655-0242ac110009 to disappear
Jan  6 18:04:46.640: INFO: Pod pod-configmaps-a69e6cac-5049-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:04:46.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-86dqg" for this suite.
Jan  6 18:04:52.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:04:52.760: INFO: namespace: e2e-tests-configmap-86dqg, resource: bindings, ignored listing per whitelist
Jan  6 18:04:52.800: INFO: namespace e2e-tests-configmap-86dqg deletion completed in 6.155831726s

• [SLOW TEST:10.593 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:04:52.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-acebf355-5049-11eb-8655-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  6 18:04:52.911: INFO: Waiting up to 5m0s for pod "pod-secrets-acedd88e-5049-11eb-8655-0242ac110009" in namespace "e2e-tests-secrets-sp6pb" to be "success or failure"
Jan  6 18:04:52.915: INFO: Pod "pod-secrets-acedd88e-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.823888ms
Jan  6 18:04:54.918: INFO: Pod "pod-secrets-acedd88e-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007073108s
Jan  6 18:04:56.922: INFO: Pod "pod-secrets-acedd88e-5049-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010962958s
STEP: Saw pod success
Jan  6 18:04:56.922: INFO: Pod "pod-secrets-acedd88e-5049-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:04:56.925: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-acedd88e-5049-11eb-8655-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan  6 18:04:56.946: INFO: Waiting for pod pod-secrets-acedd88e-5049-11eb-8655-0242ac110009 to disappear
Jan  6 18:04:56.950: INFO: Pod pod-secrets-acedd88e-5049-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:04:56.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-sp6pb" for this suite.
Jan  6 18:05:02.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:05:03.063: INFO: namespace: e2e-tests-secrets-sp6pb, resource: bindings, ignored listing per whitelist
Jan  6 18:05:03.063: INFO: namespace e2e-tests-secrets-sp6pb deletion completed in 6.108766718s

• [SLOW TEST:10.262 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:05:03.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-b30eb6f6-5049-11eb-8655-0242ac110009
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-b30eb6f6-5049-11eb-8655-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:05:09.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pk4ts" for this suite.
Jan  6 18:05:29.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:05:29.310: INFO: namespace: e2e-tests-projected-pk4ts, resource: bindings, ignored listing per whitelist
Jan  6 18:05:29.387: INFO: namespace e2e-tests-projected-pk4ts deletion completed in 20.12955581s

• [SLOW TEST:26.324 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:05:29.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-jvbx4/configmap-test-c2bae892-5049-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  6 18:05:29.518: INFO: Waiting up to 5m0s for pod "pod-configmaps-c2be4553-5049-11eb-8655-0242ac110009" in namespace "e2e-tests-configmap-jvbx4" to be "success or failure"
Jan  6 18:05:29.538: INFO: Pod "pod-configmaps-c2be4553-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 19.305902ms
Jan  6 18:05:31.544: INFO: Pod "pod-configmaps-c2be4553-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025414567s
Jan  6 18:05:33.548: INFO: Pod "pod-configmaps-c2be4553-5049-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02948494s
STEP: Saw pod success
Jan  6 18:05:33.548: INFO: Pod "pod-configmaps-c2be4553-5049-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:05:33.586: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-c2be4553-5049-11eb-8655-0242ac110009 container env-test: 
STEP: delete the pod
Jan  6 18:05:33.676: INFO: Waiting for pod pod-configmaps-c2be4553-5049-11eb-8655-0242ac110009 to disappear
Jan  6 18:05:33.730: INFO: Pod pod-configmaps-c2be4553-5049-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:05:33.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jvbx4" for this suite.
Jan  6 18:05:39.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:05:39.886: INFO: namespace: e2e-tests-configmap-jvbx4, resource: bindings, ignored listing per whitelist
Jan  6 18:05:39.943: INFO: namespace e2e-tests-configmap-jvbx4 deletion completed in 6.210141163s

• [SLOW TEST:10.556 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:05:39.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-c90896fb-5049-11eb-8655-0242ac110009
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:05:46.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8b5gz" for this suite.
Jan  6 18:06:08.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:06:08.221: INFO: namespace: e2e-tests-configmap-8b5gz, resource: bindings, ignored listing per whitelist
Jan  6 18:06:08.247: INFO: namespace e2e-tests-configmap-8b5gz deletion completed in 22.123378966s

• [SLOW TEST:28.303 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:06:08.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-d9ec2eac-5049-11eb-8655-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  6 18:06:08.408: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d9ede629-5049-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-bt69x" to be "success or failure"
Jan  6 18:06:08.424: INFO: Pod "pod-projected-secrets-d9ede629-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.795913ms
Jan  6 18:06:10.428: INFO: Pod "pod-projected-secrets-d9ede629-5049-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019874733s
Jan  6 18:06:12.455: INFO: Pod "pod-projected-secrets-d9ede629-5049-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046339665s
STEP: Saw pod success
Jan  6 18:06:12.455: INFO: Pod "pod-projected-secrets-d9ede629-5049-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:06:12.458: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-d9ede629-5049-11eb-8655-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Jan  6 18:06:12.481: INFO: Waiting for pod pod-projected-secrets-d9ede629-5049-11eb-8655-0242ac110009 to disappear
Jan  6 18:06:12.544: INFO: Pod pod-projected-secrets-d9ede629-5049-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:06:12.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bt69x" for this suite.
Jan  6 18:06:18.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:06:18.617: INFO: namespace: e2e-tests-projected-bt69x, resource: bindings, ignored listing per whitelist
Jan  6 18:06:18.662: INFO: namespace e2e-tests-projected-bt69x deletion completed in 6.113941062s

• [SLOW TEST:10.414 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:06:18.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-dxtt
STEP: Creating a pod to test atomic-volume-subpath
Jan  6 18:06:18.787: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dxtt" in namespace "e2e-tests-subpath-9gzzl" to be "success or failure"
Jan  6 18:06:18.813: INFO: Pod "pod-subpath-test-configmap-dxtt": Phase="Pending", Reason="", readiness=false. Elapsed: 25.007407ms
Jan  6 18:06:20.817: INFO: Pod "pod-subpath-test-configmap-dxtt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029310915s
Jan  6 18:06:22.820: INFO: Pod "pod-subpath-test-configmap-dxtt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032645811s
Jan  6 18:06:24.861: INFO: Pod "pod-subpath-test-configmap-dxtt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073950855s
Jan  6 18:06:26.866: INFO: Pod "pod-subpath-test-configmap-dxtt": Phase="Running", Reason="", readiness=false. Elapsed: 8.078659425s
Jan  6 18:06:28.871: INFO: Pod "pod-subpath-test-configmap-dxtt": Phase="Running", Reason="", readiness=false. Elapsed: 10.083009253s
Jan  6 18:06:30.874: INFO: Pod "pod-subpath-test-configmap-dxtt": Phase="Running", Reason="", readiness=false. Elapsed: 12.0865076s
Jan  6 18:06:32.878: INFO: Pod "pod-subpath-test-configmap-dxtt": Phase="Running", Reason="", readiness=false. Elapsed: 14.090742131s
Jan  6 18:06:34.882: INFO: Pod "pod-subpath-test-configmap-dxtt": Phase="Running", Reason="", readiness=false. Elapsed: 16.094195516s
Jan  6 18:06:36.886: INFO: Pod "pod-subpath-test-configmap-dxtt": Phase="Running", Reason="", readiness=false. Elapsed: 18.098312669s
Jan  6 18:06:38.889: INFO: Pod "pod-subpath-test-configmap-dxtt": Phase="Running", Reason="", readiness=false. Elapsed: 20.101962796s
Jan  6 18:06:40.893: INFO: Pod "pod-subpath-test-configmap-dxtt": Phase="Running", Reason="", readiness=false. Elapsed: 22.105000733s
Jan  6 18:06:42.896: INFO: Pod "pod-subpath-test-configmap-dxtt": Phase="Running", Reason="", readiness=false. Elapsed: 24.108980677s
Jan  6 18:06:45.000: INFO: Pod "pod-subpath-test-configmap-dxtt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.212169178s
STEP: Saw pod success
Jan  6 18:06:45.000: INFO: Pod "pod-subpath-test-configmap-dxtt" satisfied condition "success or failure"
Jan  6 18:06:45.003: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-dxtt container test-container-subpath-configmap-dxtt: 
STEP: delete the pod
Jan  6 18:06:45.151: INFO: Waiting for pod pod-subpath-test-configmap-dxtt to disappear
Jan  6 18:06:45.163: INFO: Pod pod-subpath-test-configmap-dxtt no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dxtt
Jan  6 18:06:45.163: INFO: Deleting pod "pod-subpath-test-configmap-dxtt" in namespace "e2e-tests-subpath-9gzzl"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:06:45.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-9gzzl" for this suite.
Jan  6 18:06:51.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:06:51.188: INFO: namespace: e2e-tests-subpath-9gzzl, resource: bindings, ignored listing per whitelist
Jan  6 18:06:51.310: INFO: namespace e2e-tests-subpath-9gzzl deletion completed in 6.14139526s

• [SLOW TEST:32.648 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:06:51.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-fqv7g A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-fqv7g;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-fqv7g A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-fqv7g;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-fqv7g.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-fqv7g.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-fqv7g.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-fqv7g.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-fqv7g.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-fqv7g.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-fqv7g.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-fqv7g.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-fqv7g.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 106.99.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.99.106_udp@PTR;check="$$(dig +tcp +noall +answer +search 106.99.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.99.106_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-fqv7g A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-fqv7g;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-fqv7g A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-fqv7g.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-fqv7g.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-fqv7g.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-fqv7g.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-fqv7g.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-fqv7g.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-fqv7g.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-fqv7g.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 106.99.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.99.106_udp@PTR;check="$$(dig +tcp +noall +answer +search 106.99.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.99.106_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  6 18:06:59.600: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:06:59.608: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:06:59.610: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:06:59.625: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:06:59.627: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:06:59.629: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:06:59.631: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:06:59.634: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:06:59.637: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:06:59.639: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:06:59.641: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:06:59.658: INFO: Lookups using e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009 failed for: [wheezy_udp@dns-test-service.e2e-tests-dns-fqv7g wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-fqv7g jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g jessie_udp@dns-test-service.e2e-tests-dns-fqv7g.svc jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc]

Jan  6 18:07:04.671: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:04.685: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:04.688: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:04.732: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:04.735: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:04.739: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:04.741: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:04.744: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:04.747: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:04.756: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:04.760: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:04.780: INFO: Lookups using e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009 failed for: [wheezy_udp@dns-test-service.e2e-tests-dns-fqv7g wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-fqv7g jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g jessie_udp@dns-test-service.e2e-tests-dns-fqv7g.svc jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc]

Jan  6 18:07:09.673: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:09.690: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:09.692: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:09.709: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:09.710: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:09.713: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:09.716: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:09.718: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:09.721: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:09.723: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:09.726: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:09.773: INFO: Lookups using e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009 failed for: [wheezy_udp@dns-test-service.e2e-tests-dns-fqv7g wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-fqv7g jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g jessie_udp@dns-test-service.e2e-tests-dns-fqv7g.svc jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc]

Jan  6 18:07:14.670: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:14.683: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:14.707: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:14.724: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:14.727: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:14.729: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:14.731: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:14.734: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:14.737: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:14.739: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:14.742: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:14.760: INFO: Lookups using e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009 failed for: [wheezy_udp@dns-test-service.e2e-tests-dns-fqv7g wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-fqv7g jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g jessie_udp@dns-test-service.e2e-tests-dns-fqv7g.svc jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc]

Jan  6 18:07:19.672: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:19.688: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:19.691: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:19.715: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:19.718: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:19.720: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:19.723: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:19.726: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:19.729: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:19.733: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:19.748: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:19.764: INFO: Lookups using e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009 failed for: [wheezy_udp@dns-test-service.e2e-tests-dns-fqv7g wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-fqv7g jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g jessie_udp@dns-test-service.e2e-tests-dns-fqv7g.svc jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc]

Jan  6 18:07:24.669: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:24.682: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:24.685: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:24.707: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:24.710: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:24.737: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:24.741: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:24.744: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:24.747: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:24.751: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:24.755: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc from pod e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009: the server could not find the requested resource (get pods dns-test-f39aa21b-5049-11eb-8655-0242ac110009)
Jan  6 18:07:24.774: INFO: Lookups using e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009 failed for: [wheezy_udp@dns-test-service.e2e-tests-dns-fqv7g wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-fqv7g jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g jessie_udp@dns-test-service.e2e-tests-dns-fqv7g.svc jessie_tcp@dns-test-service.e2e-tests-dns-fqv7g.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fqv7g.svc]

Jan  6 18:07:29.779: INFO: DNS probes using e2e-tests-dns-fqv7g/dns-test-f39aa21b-5049-11eb-8655-0242ac110009 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:07:30.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-fqv7g" for this suite.
Jan  6 18:07:37.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:07:37.079: INFO: namespace: e2e-tests-dns-fqv7g, resource: bindings, ignored listing per whitelist
Jan  6 18:07:37.151: INFO: namespace e2e-tests-dns-fqv7g deletion completed in 6.149701555s

• [SLOW TEST:45.840 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
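The failed-lookup lines above repeat the same set of DNS names on each 5-second retry. As a minimal sketch of how that probe set is composed (reconstructed only from the names visible in the failure lines — the real test also probes additional record types not shown here), the `wheezy`/`jessie` prefixes are the two test images, the `udp`/`tcp` suffixes are the query transports, and the base names walk up the service's DNS hierarchy:

```python
# Sketch (assumption: reconstructed from the failure lines in this log, not
# from the e2e test source) of the DNS probe names for the services test.
def probe_names(service="dns-test-service", namespace="e2e-tests-dns-fqv7g"):
    bases = [
        service,                                  # bare service name
        f"{service}.{namespace}",                 # service.namespace
        f"{service}.{namespace}.svc",             # service.namespace.svc
        f"_http._tcp.{service}.{namespace}.svc",  # SRV-style name for the named port
    ]
    # Each base name is queried over UDP and TCP from both test images.
    return [f"{img}_{proto}@{base}"
            for img in ("wheezy", "jessie")
            for proto in ("udp", "tcp")
            for base in bases]

names = probe_names()
```

Once the kube-dns answers stabilize, every name in this set resolves and the log reports "DNS probes ... succeeded", as it does above after roughly 45 seconds of retries.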
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:07:37.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  6 18:07:41.843: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0ee7ed5a-504a-11eb-8655-0242ac110009"
Jan  6 18:07:41.844: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-0ee7ed5a-504a-11eb-8655-0242ac110009" in namespace "e2e-tests-pods-wrdgs" to be "terminated due to deadline exceeded"
Jan  6 18:07:41.884: INFO: Pod "pod-update-activedeadlineseconds-0ee7ed5a-504a-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 40.417739ms
Jan  6 18:07:43.888: INFO: Pod "pod-update-activedeadlineseconds-0ee7ed5a-504a-11eb-8655-0242ac110009": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.044576965s
Jan  6 18:07:43.888: INFO: Pod "pod-update-activedeadlineseconds-0ee7ed5a-504a-11eb-8655-0242ac110009" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:07:43.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-wrdgs" for this suite.
Jan  6 18:07:49.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:07:49.973: INFO: namespace: e2e-tests-pods-wrdgs, resource: bindings, ignored listing per whitelist
Jan  6 18:07:50.003: INFO: namespace e2e-tests-pods-wrdgs deletion completed in 6.109954625s

• [SLOW TEST:12.852 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
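The `activeDeadlineSeconds` test above creates a running pod and then updates the deadline so the kubelet terminates it with reason `DeadlineExceeded`, which is exactly the phase transition visible in the log (Running → Failed within ~2 seconds). A minimal sketch of the two objects involved — the pod name, image, and deadline value here are placeholders, not the test's actual values:

```python
# Hypothetical sketch of the pattern this test exercises: a long-running pod
# plus an update that sets spec.activeDeadlineSeconds, after which the kubelet
# fails the pod with reason DeadlineExceeded. Field names are real Pod API
# fields; the concrete values are placeholders.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-update-activedeadlineseconds"},
    "spec": {
        "containers": [
            {"name": "main", "image": "nginx", "command": ["sleep", "3600"]}
        ],
    },
}

# The update that triggers termination: a short deadline measured from the
# pod's start time.
patch = {"spec": {"activeDeadlineSeconds": 5}}
```

Note that `activeDeadlineSeconds` is one of the few pod spec fields that may be mutated after creation, which is why the test can apply it to an already-running pod.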
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:07:50.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  6 18:07:50.135: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16903cbc-504a-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-62bbt" to be "success or failure"
Jan  6 18:07:50.140: INFO: Pod "downwardapi-volume-16903cbc-504a-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.49006ms
Jan  6 18:07:52.144: INFO: Pod "downwardapi-volume-16903cbc-504a-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008906753s
Jan  6 18:07:54.149: INFO: Pod "downwardapi-volume-16903cbc-504a-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013660258s
STEP: Saw pod success
Jan  6 18:07:54.149: INFO: Pod "downwardapi-volume-16903cbc-504a-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:07:54.152: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-16903cbc-504a-11eb-8655-0242ac110009 container client-container: 
STEP: delete the pod
Jan  6 18:07:54.186: INFO: Waiting for pod downwardapi-volume-16903cbc-504a-11eb-8655-0242ac110009 to disappear
Jan  6 18:07:54.206: INFO: Pod downwardapi-volume-16903cbc-504a-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:07:54.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-62bbt" for this suite.
Jan  6 18:08:00.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:08:00.267: INFO: namespace: e2e-tests-projected-62bbt, resource: bindings, ignored listing per whitelist
Jan  6 18:08:00.338: INFO: namespace e2e-tests-projected-62bbt deletion completed in 6.129183882s

• [SLOW TEST:10.335 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
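The projected downward API test above mounts the container's own memory limit as a file and checks the container can read it back. A sketch of the assumed volume shape — container name, image, mount path, and limit value are placeholders:

```python
# Sketch (assumed shape, placeholder values) of a projected downwardAPI
# volume exposing a container's memory limit as a file, the mechanism this
# test verifies.
pod_spec = {
    "containers": [{
        "name": "client-container",
        "image": "busybox",
        "command": ["sh", "-c", "cat /etc/podinfo/mem_limit"],
        "resources": {"limits": {"memory": "64Mi"}},
        "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
    }],
    "volumes": [{
        "name": "podinfo",
        "projected": {"sources": [{
            "downwardAPI": {"items": [{
                # File inside the mount; its content is the resolved limit.
                "path": "mem_limit",
                "resourceFieldRef": {
                    "containerName": "client-container",
                    "resource": "limits.memory",
                },
            }]},
        }]},
    }],
}
```

The pod runs to `Succeeded` once the command reads the file, matching the "success or failure" condition the log waits on.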
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:08:00.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  6 18:08:00.441: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/: 
alternatives.log
containers/

[identical log-directory listing repeated for the remaining proxy-logs requests; the per-request prefixes, the end of this test, and the header of the next test, "[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd)", were lost in capture]
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan  6 18:08:06.706: INFO: Waiting up to 5m0s for pod "client-containers-2070fbdc-504a-11eb-8655-0242ac110009" in namespace "e2e-tests-containers-n5jsx" to be "success or failure"
Jan  6 18:08:06.723: INFO: Pod "client-containers-2070fbdc-504a-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 17.222429ms
Jan  6 18:08:08.727: INFO: Pod "client-containers-2070fbdc-504a-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021690314s
Jan  6 18:08:10.732: INFO: Pod "client-containers-2070fbdc-504a-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025942269s
STEP: Saw pod success
Jan  6 18:08:10.732: INFO: Pod "client-containers-2070fbdc-504a-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:08:10.735: INFO: Trying to get logs from node hunter-worker2 pod client-containers-2070fbdc-504a-11eb-8655-0242ac110009 container test-container: 
STEP: delete the pod
Jan  6 18:08:10.754: INFO: Waiting for pod client-containers-2070fbdc-504a-11eb-8655-0242ac110009 to disappear
Jan  6 18:08:10.785: INFO: Pod client-containers-2070fbdc-504a-11eb-8655-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:08:10.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-n5jsx" for this suite.
Jan  6 18:08:16.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:08:16.823: INFO: namespace: e2e-tests-containers-n5jsx, resource: bindings, ignored listing per whitelist
Jan  6 18:08:16.904: INFO: namespace e2e-tests-containers-n5jsx deletion completed in 6.115490308s

• [SLOW TEST:10.268 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
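The Docker Containers test above verifies that setting `args` on a container replaces the image's default CMD while leaving its ENTRYPOINT intact. A minimal sketch of that container spec — the image and argument values are placeholders, not the test's actual ones:

```python
# Sketch (placeholder image and values) of the override this test checks:
# `args` replaces the image's CMD; `command`, if set, would replace its
# ENTRYPOINT. Here only args is set, so the image ENTRYPOINT still runs,
# but with these arguments instead of the baked-in CMD.
container = {
    "name": "test-container",
    "image": "busybox",
    "args": ["echo", "override", "arguments"],
}
```

The four-way matrix (neither set, only `command`, only `args`, both set) is covered by sibling conformance tests; this one exercises the `args`-only cell.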
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:08:16.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-ph9f
STEP: Creating a pod to test atomic-volume-subpath
Jan  6 18:08:17.069: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ph9f" in namespace "e2e-tests-subpath-hrdsm" to be "success or failure"
Jan  6 18:08:17.074: INFO: Pod "pod-subpath-test-configmap-ph9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.88009ms
Jan  6 18:08:19.079: INFO: Pod "pod-subpath-test-configmap-ph9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009352068s
Jan  6 18:08:21.083: INFO: Pod "pod-subpath-test-configmap-ph9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013620423s
Jan  6 18:08:23.087: INFO: Pod "pod-subpath-test-configmap-ph9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017259505s
Jan  6 18:08:25.091: INFO: Pod "pod-subpath-test-configmap-ph9f": Phase="Running", Reason="", readiness=false. Elapsed: 8.021442377s
Jan  6 18:08:27.095: INFO: Pod "pod-subpath-test-configmap-ph9f": Phase="Running", Reason="", readiness=false. Elapsed: 10.02582968s
Jan  6 18:08:29.099: INFO: Pod "pod-subpath-test-configmap-ph9f": Phase="Running", Reason="", readiness=false. Elapsed: 12.02998829s
Jan  6 18:08:31.104: INFO: Pod "pod-subpath-test-configmap-ph9f": Phase="Running", Reason="", readiness=false. Elapsed: 14.034386465s
Jan  6 18:08:33.108: INFO: Pod "pod-subpath-test-configmap-ph9f": Phase="Running", Reason="", readiness=false. Elapsed: 16.038756s
Jan  6 18:08:35.112: INFO: Pod "pod-subpath-test-configmap-ph9f": Phase="Running", Reason="", readiness=false. Elapsed: 18.042780118s
Jan  6 18:08:37.116: INFO: Pod "pod-subpath-test-configmap-ph9f": Phase="Running", Reason="", readiness=false. Elapsed: 20.046735089s
Jan  6 18:08:39.120: INFO: Pod "pod-subpath-test-configmap-ph9f": Phase="Running", Reason="", readiness=false. Elapsed: 22.050793236s
Jan  6 18:08:41.125: INFO: Pod "pod-subpath-test-configmap-ph9f": Phase="Running", Reason="", readiness=false. Elapsed: 24.055336314s
Jan  6 18:08:43.129: INFO: Pod "pod-subpath-test-configmap-ph9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.059519422s
STEP: Saw pod success
Jan  6 18:08:43.129: INFO: Pod "pod-subpath-test-configmap-ph9f" satisfied condition "success or failure"
Jan  6 18:08:43.132: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-ph9f container test-container-subpath-configmap-ph9f: 
STEP: delete the pod
Jan  6 18:08:43.257: INFO: Waiting for pod pod-subpath-test-configmap-ph9f to disappear
Jan  6 18:08:43.311: INFO: Pod pod-subpath-test-configmap-ph9f no longer exists
STEP: Deleting pod pod-subpath-test-configmap-ph9f
Jan  6 18:08:43.311: INFO: Deleting pod "pod-subpath-test-configmap-ph9f" in namespace "e2e-tests-subpath-hrdsm"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:08:43.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hrdsm" for this suite.
Jan  6 18:08:49.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:08:49.408: INFO: namespace: e2e-tests-subpath-hrdsm, resource: bindings, ignored listing per whitelist
Jan  6 18:08:49.464: INFO: namespace e2e-tests-subpath-hrdsm deletion completed in 6.144835826s

• [SLOW TEST:32.559 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
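The subpath test above mounts a single ConfigMap key over an existing file rather than shadowing a whole directory, using `subPath` on the volume mount. A sketch of the assumed mount shape — the volume name, target path, and key are placeholders:

```python
# Sketch (placeholder names) of the subPath pattern this test exercises:
# mounting one key of a ConfigMap volume onto the path of an existing file,
# so only that file is replaced and the rest of the directory is untouched.
volume_mount = {
    "name": "my-configmap",
    "mountPath": "/etc/existing-file.conf",  # path of an existing file
    "subPath": "existing-file.conf",         # single key from the volume
}
```

Because `subPath` mounts are resolved once at container start, updates to the ConfigMap are not reflected in the mounted file, unlike a whole-volume mount.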
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:08:49.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  6 18:08:49.561: INFO: Waiting up to 5m0s for pod "pod-39fb7661-504a-11eb-8655-0242ac110009" in namespace "e2e-tests-emptydir-2cwrf" to be "success or failure"
Jan  6 18:08:49.587: INFO: Pod "pod-39fb7661-504a-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 25.148644ms
Jan  6 18:08:51.679: INFO: Pod "pod-39fb7661-504a-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117593137s
Jan  6 18:08:53.683: INFO: Pod "pod-39fb7661-504a-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.121444871s
Jan  6 18:08:55.687: INFO: Pod "pod-39fb7661-504a-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.125769796s
STEP: Saw pod success
Jan  6 18:08:55.687: INFO: Pod "pod-39fb7661-504a-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:08:55.690: INFO: Trying to get logs from node hunter-worker2 pod pod-39fb7661-504a-11eb-8655-0242ac110009 container test-container: 
STEP: delete the pod
Jan  6 18:08:55.722: INFO: Waiting for pod pod-39fb7661-504a-11eb-8655-0242ac110009 to disappear
Jan  6 18:08:55.749: INFO: Pod pod-39fb7661-504a-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:08:55.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2cwrf" for this suite.
Jan  6 18:09:01.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:09:01.840: INFO: namespace: e2e-tests-emptydir-2cwrf, resource: bindings, ignored listing per whitelist
Jan  6 18:09:01.903: INFO: namespace e2e-tests-emptydir-2cwrf deletion completed in 6.149962452s

• [SLOW TEST:12.439 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
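The emptyDir test above covers one cell of a permissions matrix: a non-root user writing a 0666-mode file on the default (disk-backed) medium. A sketch of the assumed pod shape — the UID, image, paths, and shell command are placeholders:

```python
# Sketch (placeholder UID, image, and paths) of the (non-root, 0666, default)
# emptyDir cell: a non-root container writes a file into an emptyDir volume
# on the default medium and the test verifies its mode and content.
pod_spec = {
    "securityContext": {"runAsUser": 1001},   # non-root
    "containers": [{
        "name": "test-container",
        "image": "busybox",
        "command": ["sh", "-c",
                    "touch /test-volume/f && chmod 0666 /test-volume/f"],
        "volumeMounts": [{"name": "test-volume",
                          "mountPath": "/test-volume"}],
    }],
    # An empty {} selects the default medium (node disk); the memory-backed
    # variant would be {"medium": "Memory"}.
    "volumes": [{"name": "test-volume", "emptyDir": {}}],
}
```

Sibling conformance tests sweep the other cells (root vs. non-root, other modes, `Memory` medium); the log above shows only this combination.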
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:09:01.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-s24dh
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-s24dh
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-s24dh
Jan  6 18:09:02.071: INFO: Found 0 stateful pods, waiting for 1
Jan  6 18:09:12.077: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan  6 18:09:12.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-s24dh ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  6 18:09:12.350: INFO: stderr: "I0106 18:09:12.219400    2295 log.go:172] (0xc000154840) (0xc00073a640) Create stream\nI0106 18:09:12.219497    2295 log.go:172] (0xc000154840) (0xc00073a640) Stream added, broadcasting: 1\nI0106 18:09:12.223167    2295 log.go:172] (0xc000154840) Reply frame received for 1\nI0106 18:09:12.223233    2295 log.go:172] (0xc000154840) (0xc0005dadc0) Create stream\nI0106 18:09:12.223253    2295 log.go:172] (0xc000154840) (0xc0005dadc0) Stream added, broadcasting: 3\nI0106 18:09:12.224553    2295 log.go:172] (0xc000154840) Reply frame received for 3\nI0106 18:09:12.224619    2295 log.go:172] (0xc000154840) (0xc0007ce000) Create stream\nI0106 18:09:12.224641    2295 log.go:172] (0xc000154840) (0xc0007ce000) Stream added, broadcasting: 5\nI0106 18:09:12.226244    2295 log.go:172] (0xc000154840) Reply frame received for 5\nI0106 18:09:12.343394    2295 log.go:172] (0xc000154840) Data frame received for 3\nI0106 18:09:12.343430    2295 log.go:172] (0xc0005dadc0) (3) Data frame handling\nI0106 18:09:12.343465    2295 log.go:172] (0xc0005dadc0) (3) Data frame sent\nI0106 18:09:12.343478    2295 log.go:172] (0xc000154840) Data frame received for 3\nI0106 18:09:12.343488    2295 log.go:172] (0xc0005dadc0) (3) Data frame handling\nI0106 18:09:12.343619    2295 log.go:172] (0xc000154840) Data frame received for 5\nI0106 18:09:12.343658    2295 log.go:172] (0xc0007ce000) (5) Data frame handling\nI0106 18:09:12.345582    2295 log.go:172] (0xc000154840) Data frame received for 1\nI0106 18:09:12.345611    2295 log.go:172] (0xc00073a640) (1) Data frame handling\nI0106 18:09:12.345633    2295 log.go:172] (0xc00073a640) (1) Data frame sent\nI0106 18:09:12.345647    2295 log.go:172] (0xc000154840) (0xc00073a640) Stream removed, broadcasting: 1\nI0106 18:09:12.345661    2295 log.go:172] (0xc000154840) Go away received\nI0106 18:09:12.345947    2295 log.go:172] (0xc000154840) (0xc00073a640) Stream removed, broadcasting: 1\nI0106 18:09:12.345972    2295 log.go:172] (0xc000154840) (0xc0005dadc0) Stream removed, broadcasting: 3\nI0106 18:09:12.345986    2295 log.go:172] (0xc000154840) (0xc0007ce000) Stream removed, broadcasting: 5\n"
Jan  6 18:09:12.350: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  6 18:09:12.350: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

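What the `kubectl exec ... mv -v /usr/share/nginx/html/index.html /tmp/` calls above are doing: the StatefulSet's nginx container uses an HTTP readiness probe against `index.html`, so moving the file out of the webroot makes the probe fail (pod goes `Ready=false`) and moving it back restores readiness. Below is a minimal local simulation of that trick; the temp-directory paths are hypothetical stand-ins for the pod's `/usr/share/nginx/html`, and no cluster is involved.

```shell
#!/bin/sh
# Local simulation of the probe-breaking trick the test runs via
# `kubectl exec`. Paths are hypothetical stand-ins for the pod webroot.
set -eu
root=$(mktemp -d)
mkdir -p "$root/html"
echo ok > "$root/html/index.html"

# Move the probe target away; `|| true` keeps the exit code 0 even when
# the file is already gone (e.g. on a retried or restarted pod).
mv -v "$root/html/index.html" "$root/tmp_index.html" || true

if [ ! -f "$root/html/index.html" ]; then
  echo "readiness probe would now fail"
fi

# Restoring the file flips the probe back to Ready.
mv -v "$root/tmp_index.html" "$root/html/index.html" || true
```

The `|| true` also explains the `mv: can't rename '/tmp/index.html': No such file or directory` lines seen later in the stderr dumps: on freshly created replicas there is nothing in `/tmp` to restore, and the test deliberately tolerates that.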
Jan  6 18:09:12.353: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  6 18:09:22.363: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  6 18:09:22.363: INFO: Waiting for statefulset status.replicas updated to 0
Jan  6 18:09:22.376: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jan  6 18:09:22.376: INFO: ss-0  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:02 +0000 UTC  }]
Jan  6 18:09:22.376: INFO: 
Jan  6 18:09:22.376: INFO: StatefulSet ss has not reached scale 3, at 1
Jan  6 18:09:23.445: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993774162s
Jan  6 18:09:24.535: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.924341621s
Jan  6 18:09:25.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.834475111s
Jan  6 18:09:26.579: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.79479323s
Jan  6 18:09:27.584: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.791028467s
Jan  6 18:09:28.610: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.785804742s
Jan  6 18:09:29.615: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.759267024s
Jan  6 18:09:30.620: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.754541069s
Jan  6 18:09:31.625: INFO: Verifying statefulset ss doesn't scale past 3 for another 749.440607ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-s24dh
Jan  6 18:09:32.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-s24dh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 18:09:32.869: INFO: stderr: "I0106 18:09:32.754812    2318 log.go:172] (0xc0007e02c0) (0xc000722640) Create stream\nI0106 18:09:32.754871    2318 log.go:172] (0xc0007e02c0) (0xc000722640) Stream added, broadcasting: 1\nI0106 18:09:32.757370    2318 log.go:172] (0xc0007e02c0) Reply frame received for 1\nI0106 18:09:32.757410    2318 log.go:172] (0xc0007e02c0) (0xc00039ac80) Create stream\nI0106 18:09:32.757418    2318 log.go:172] (0xc0007e02c0) (0xc00039ac80) Stream added, broadcasting: 3\nI0106 18:09:32.758381    2318 log.go:172] (0xc0007e02c0) Reply frame received for 3\nI0106 18:09:32.758428    2318 log.go:172] (0xc0007e02c0) (0xc000678000) Create stream\nI0106 18:09:32.758445    2318 log.go:172] (0xc0007e02c0) (0xc000678000) Stream added, broadcasting: 5\nI0106 18:09:32.759363    2318 log.go:172] (0xc0007e02c0) Reply frame received for 5\nI0106 18:09:32.863352    2318 log.go:172] (0xc0007e02c0) Data frame received for 5\nI0106 18:09:32.863382    2318 log.go:172] (0xc000678000) (5) Data frame handling\nI0106 18:09:32.863421    2318 log.go:172] (0xc0007e02c0) Data frame received for 3\nI0106 18:09:32.863454    2318 log.go:172] (0xc00039ac80) (3) Data frame handling\nI0106 18:09:32.863477    2318 log.go:172] (0xc00039ac80) (3) Data frame sent\nI0106 18:09:32.863487    2318 log.go:172] (0xc0007e02c0) Data frame received for 3\nI0106 18:09:32.863496    2318 log.go:172] (0xc00039ac80) (3) Data frame handling\nI0106 18:09:32.864649    2318 log.go:172] (0xc0007e02c0) Data frame received for 1\nI0106 18:09:32.864669    2318 log.go:172] (0xc000722640) (1) Data frame handling\nI0106 18:09:32.864680    2318 log.go:172] (0xc000722640) (1) Data frame sent\nI0106 18:09:32.864696    2318 log.go:172] (0xc0007e02c0) (0xc000722640) Stream removed, broadcasting: 1\nI0106 18:09:32.864724    2318 log.go:172] (0xc0007e02c0) Go away received\nI0106 18:09:32.864989    2318 log.go:172] (0xc0007e02c0) (0xc000722640) Stream removed, broadcasting: 1\nI0106 18:09:32.865007    2318 
log.go:172] (0xc0007e02c0) (0xc00039ac80) Stream removed, broadcasting: 3\nI0106 18:09:32.865017    2318 log.go:172] (0xc0007e02c0) (0xc000678000) Stream removed, broadcasting: 5\n"
Jan  6 18:09:32.869: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  6 18:09:32.869: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  6 18:09:32.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-s24dh ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 18:09:33.072: INFO: stderr: "I0106 18:09:32.995051    2340 log.go:172] (0xc0008342c0) (0xc00072e640) Create stream\nI0106 18:09:32.995114    2340 log.go:172] (0xc0008342c0) (0xc00072e640) Stream added, broadcasting: 1\nI0106 18:09:32.997723    2340 log.go:172] (0xc0008342c0) Reply frame received for 1\nI0106 18:09:32.997763    2340 log.go:172] (0xc0008342c0) (0xc000620c80) Create stream\nI0106 18:09:32.997775    2340 log.go:172] (0xc0008342c0) (0xc000620c80) Stream added, broadcasting: 3\nI0106 18:09:32.998779    2340 log.go:172] (0xc0008342c0) Reply frame received for 3\nI0106 18:09:32.998826    2340 log.go:172] (0xc0008342c0) (0xc0001ee000) Create stream\nI0106 18:09:32.998843    2340 log.go:172] (0xc0008342c0) (0xc0001ee000) Stream added, broadcasting: 5\nI0106 18:09:32.999880    2340 log.go:172] (0xc0008342c0) Reply frame received for 5\nI0106 18:09:33.065690    2340 log.go:172] (0xc0008342c0) Data frame received for 3\nI0106 18:09:33.065750    2340 log.go:172] (0xc000620c80) (3) Data frame handling\nI0106 18:09:33.065787    2340 log.go:172] (0xc000620c80) (3) Data frame sent\nI0106 18:09:33.065808    2340 log.go:172] (0xc0008342c0) Data frame received for 3\nI0106 18:09:33.065826    2340 log.go:172] (0xc000620c80) (3) Data frame handling\nI0106 18:09:33.066059    2340 log.go:172] (0xc0008342c0) Data frame received for 5\nI0106 18:09:33.066088    2340 log.go:172] (0xc0001ee000) (5) Data frame handling\nI0106 18:09:33.066142    2340 log.go:172] (0xc0001ee000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0106 18:09:33.066249    2340 log.go:172] (0xc0008342c0) Data frame received for 5\nI0106 18:09:33.066269    2340 log.go:172] (0xc0001ee000) (5) Data frame handling\nI0106 18:09:33.067744    2340 log.go:172] (0xc0008342c0) Data frame received for 1\nI0106 18:09:33.067764    2340 log.go:172] (0xc00072e640) (1) Data frame handling\nI0106 18:09:33.067771    2340 log.go:172] (0xc00072e640) (1) Data frame sent\nI0106 
18:09:33.067779    2340 log.go:172] (0xc0008342c0) (0xc00072e640) Stream removed, broadcasting: 1\nI0106 18:09:33.067790    2340 log.go:172] (0xc0008342c0) Go away received\nI0106 18:09:33.068003    2340 log.go:172] (0xc0008342c0) (0xc00072e640) Stream removed, broadcasting: 1\nI0106 18:09:33.068023    2340 log.go:172] (0xc0008342c0) (0xc000620c80) Stream removed, broadcasting: 3\nI0106 18:09:33.068035    2340 log.go:172] (0xc0008342c0) (0xc0001ee000) Stream removed, broadcasting: 5\n"
Jan  6 18:09:33.072: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  6 18:09:33.072: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  6 18:09:33.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-s24dh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 18:09:33.285: INFO: stderr: "I0106 18:09:33.197183    2363 log.go:172] (0xc0007ec2c0) (0xc000714640) Create stream\nI0106 18:09:33.197284    2363 log.go:172] (0xc0007ec2c0) (0xc000714640) Stream added, broadcasting: 1\nI0106 18:09:33.200710    2363 log.go:172] (0xc0007ec2c0) Reply frame received for 1\nI0106 18:09:33.201013    2363 log.go:172] (0xc0007ec2c0) (0xc0006a4f00) Create stream\nI0106 18:09:33.201587    2363 log.go:172] (0xc0007ec2c0) (0xc0006a4f00) Stream added, broadcasting: 3\nI0106 18:09:33.202892    2363 log.go:172] (0xc0007ec2c0) Reply frame received for 3\nI0106 18:09:33.202945    2363 log.go:172] (0xc0007ec2c0) (0xc000482000) Create stream\nI0106 18:09:33.202969    2363 log.go:172] (0xc0007ec2c0) (0xc000482000) Stream added, broadcasting: 5\nI0106 18:09:33.204035    2363 log.go:172] (0xc0007ec2c0) Reply frame received for 5\nI0106 18:09:33.278208    2363 log.go:172] (0xc0007ec2c0) Data frame received for 5\nI0106 18:09:33.278245    2363 log.go:172] (0xc000482000) (5) Data frame handling\nI0106 18:09:33.278255    2363 log.go:172] (0xc0007ec2c0) Data frame received for 3\nI0106 18:09:33.278267    2363 log.go:172] (0xc0006a4f00) (3) Data frame handling\nI0106 18:09:33.278275    2363 log.go:172] (0xc0006a4f00) (3) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0106 18:09:33.278295    2363 log.go:172] (0xc000482000) (5) Data frame sent\nI0106 18:09:33.278524    2363 log.go:172] (0xc0007ec2c0) Data frame received for 5\nI0106 18:09:33.278585    2363 log.go:172] (0xc000482000) (5) Data frame handling\nI0106 18:09:33.278629    2363 log.go:172] (0xc0007ec2c0) Data frame received for 3\nI0106 18:09:33.278684    2363 log.go:172] (0xc0006a4f00) (3) Data frame handling\nI0106 18:09:33.280773    2363 log.go:172] (0xc0007ec2c0) Data frame received for 1\nI0106 18:09:33.280805    2363 log.go:172] (0xc000714640) (1) Data frame handling\nI0106 18:09:33.280830    2363 log.go:172] (0xc000714640) (1) Data frame sent\nI0106 
18:09:33.280993    2363 log.go:172] (0xc0007ec2c0) (0xc000714640) Stream removed, broadcasting: 1\nI0106 18:09:33.281016    2363 log.go:172] (0xc0007ec2c0) Go away received\nI0106 18:09:33.281294    2363 log.go:172] (0xc0007ec2c0) (0xc000714640) Stream removed, broadcasting: 1\nI0106 18:09:33.281330    2363 log.go:172] (0xc0007ec2c0) (0xc0006a4f00) Stream removed, broadcasting: 3\nI0106 18:09:33.281346    2363 log.go:172] (0xc0007ec2c0) (0xc000482000) Stream removed, broadcasting: 5\n"
Jan  6 18:09:33.286: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  6 18:09:33.286: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  6 18:09:33.290: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jan  6 18:09:43.295: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 18:09:43.295: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 18:09:43.295: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan  6 18:09:43.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-s24dh ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  6 18:09:43.542: INFO: stderr: "I0106 18:09:43.433642    2386 log.go:172] (0xc0008362c0) (0xc00071e640) Create stream\nI0106 18:09:43.433689    2386 log.go:172] (0xc0008362c0) (0xc00071e640) Stream added, broadcasting: 1\nI0106 18:09:43.435928    2386 log.go:172] (0xc0008362c0) Reply frame received for 1\nI0106 18:09:43.435984    2386 log.go:172] (0xc0008362c0) (0xc0006c8d20) Create stream\nI0106 18:09:43.436004    2386 log.go:172] (0xc0008362c0) (0xc0006c8d20) Stream added, broadcasting: 3\nI0106 18:09:43.437369    2386 log.go:172] (0xc0008362c0) Reply frame received for 3\nI0106 18:09:43.437443    2386 log.go:172] (0xc0008362c0) (0xc0002c2000) Create stream\nI0106 18:09:43.437478    2386 log.go:172] (0xc0008362c0) (0xc0002c2000) Stream added, broadcasting: 5\nI0106 18:09:43.438538    2386 log.go:172] (0xc0008362c0) Reply frame received for 5\nI0106 18:09:43.535654    2386 log.go:172] (0xc0008362c0) Data frame received for 5\nI0106 18:09:43.535697    2386 log.go:172] (0xc0002c2000) (5) Data frame handling\nI0106 18:09:43.535732    2386 log.go:172] (0xc0008362c0) Data frame received for 3\nI0106 18:09:43.535751    2386 log.go:172] (0xc0006c8d20) (3) Data frame handling\nI0106 18:09:43.535774    2386 log.go:172] (0xc0006c8d20) (3) Data frame sent\nI0106 18:09:43.535799    2386 log.go:172] (0xc0008362c0) Data frame received for 3\nI0106 18:09:43.535816    2386 log.go:172] (0xc0006c8d20) (3) Data frame handling\nI0106 18:09:43.537615    2386 log.go:172] (0xc0008362c0) Data frame received for 1\nI0106 18:09:43.537644    2386 log.go:172] (0xc00071e640) (1) Data frame handling\nI0106 18:09:43.537665    2386 log.go:172] (0xc00071e640) (1) Data frame sent\nI0106 18:09:43.537690    2386 log.go:172] (0xc0008362c0) (0xc00071e640) Stream removed, broadcasting: 1\nI0106 18:09:43.537739    2386 log.go:172] (0xc0008362c0) Go away received\nI0106 18:09:43.537879    2386 log.go:172] (0xc0008362c0) (0xc00071e640) Stream removed, broadcasting: 1\nI0106 18:09:43.537895    2386 
log.go:172] (0xc0008362c0) (0xc0006c8d20) Stream removed, broadcasting: 3\nI0106 18:09:43.537905    2386 log.go:172] (0xc0008362c0) (0xc0002c2000) Stream removed, broadcasting: 5\n"
Jan  6 18:09:43.542: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  6 18:09:43.542: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  6 18:09:43.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-s24dh ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  6 18:09:43.800: INFO: stderr: "I0106 18:09:43.666619    2409 log.go:172] (0xc00014c840) (0xc00060d400) Create stream\nI0106 18:09:43.666691    2409 log.go:172] (0xc00014c840) (0xc00060d400) Stream added, broadcasting: 1\nI0106 18:09:43.669176    2409 log.go:172] (0xc00014c840) Reply frame received for 1\nI0106 18:09:43.669243    2409 log.go:172] (0xc00014c840) (0xc00060d4a0) Create stream\nI0106 18:09:43.669267    2409 log.go:172] (0xc00014c840) (0xc00060d4a0) Stream added, broadcasting: 3\nI0106 18:09:43.670224    2409 log.go:172] (0xc00014c840) Reply frame received for 3\nI0106 18:09:43.670257    2409 log.go:172] (0xc00014c840) (0xc000428000) Create stream\nI0106 18:09:43.670274    2409 log.go:172] (0xc00014c840) (0xc000428000) Stream added, broadcasting: 5\nI0106 18:09:43.671043    2409 log.go:172] (0xc00014c840) Reply frame received for 5\nI0106 18:09:43.793103    2409 log.go:172] (0xc00014c840) Data frame received for 5\nI0106 18:09:43.793140    2409 log.go:172] (0xc000428000) (5) Data frame handling\nI0106 18:09:43.793162    2409 log.go:172] (0xc00014c840) Data frame received for 3\nI0106 18:09:43.793181    2409 log.go:172] (0xc00060d4a0) (3) Data frame handling\nI0106 18:09:43.793199    2409 log.go:172] (0xc00060d4a0) (3) Data frame sent\nI0106 18:09:43.793205    2409 log.go:172] (0xc00014c840) Data frame received for 3\nI0106 18:09:43.793210    2409 log.go:172] (0xc00060d4a0) (3) Data frame handling\nI0106 18:09:43.794870    2409 log.go:172] (0xc00014c840) Data frame received for 1\nI0106 18:09:43.794887    2409 log.go:172] (0xc00060d400) (1) Data frame handling\nI0106 18:09:43.794895    2409 log.go:172] (0xc00060d400) (1) Data frame sent\nI0106 18:09:43.794911    2409 log.go:172] (0xc00014c840) (0xc00060d400) Stream removed, broadcasting: 1\nI0106 18:09:43.794923    2409 log.go:172] (0xc00014c840) Go away received\nI0106 18:09:43.795142    2409 log.go:172] (0xc00014c840) (0xc00060d400) Stream removed, broadcasting: 1\nI0106 18:09:43.795183    2409 
log.go:172] (0xc00014c840) (0xc00060d4a0) Stream removed, broadcasting: 3\nI0106 18:09:43.795209    2409 log.go:172] (0xc00014c840) (0xc000428000) Stream removed, broadcasting: 5\n"
Jan  6 18:09:43.800: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  6 18:09:43.800: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  6 18:09:43.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-s24dh ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  6 18:09:44.016: INFO: stderr: "I0106 18:09:43.915512    2432 log.go:172] (0xc0008322c0) (0xc000778640) Create stream\nI0106 18:09:43.915557    2432 log.go:172] (0xc0008322c0) (0xc000778640) Stream added, broadcasting: 1\nI0106 18:09:43.917281    2432 log.go:172] (0xc0008322c0) Reply frame received for 1\nI0106 18:09:43.917305    2432 log.go:172] (0xc0008322c0) (0xc0005bad20) Create stream\nI0106 18:09:43.917314    2432 log.go:172] (0xc0008322c0) (0xc0005bad20) Stream added, broadcasting: 3\nI0106 18:09:43.918205    2432 log.go:172] (0xc0008322c0) Reply frame received for 3\nI0106 18:09:43.918258    2432 log.go:172] (0xc0008322c0) (0xc0007786e0) Create stream\nI0106 18:09:43.918273    2432 log.go:172] (0xc0008322c0) (0xc0007786e0) Stream added, broadcasting: 5\nI0106 18:09:43.919201    2432 log.go:172] (0xc0008322c0) Reply frame received for 5\nI0106 18:09:44.009013    2432 log.go:172] (0xc0008322c0) Data frame received for 3\nI0106 18:09:44.009074    2432 log.go:172] (0xc0005bad20) (3) Data frame handling\nI0106 18:09:44.009091    2432 log.go:172] (0xc0005bad20) (3) Data frame sent\nI0106 18:09:44.009109    2432 log.go:172] (0xc0008322c0) Data frame received for 3\nI0106 18:09:44.009123    2432 log.go:172] (0xc0005bad20) (3) Data frame handling\nI0106 18:09:44.009162    2432 log.go:172] (0xc0008322c0) Data frame received for 5\nI0106 18:09:44.009174    2432 log.go:172] (0xc0007786e0) (5) Data frame handling\nI0106 18:09:44.011144    2432 log.go:172] (0xc0008322c0) Data frame received for 1\nI0106 18:09:44.011180    2432 log.go:172] (0xc000778640) (1) Data frame handling\nI0106 18:09:44.011227    2432 log.go:172] (0xc000778640) (1) Data frame sent\nI0106 18:09:44.011262    2432 log.go:172] (0xc0008322c0) (0xc000778640) Stream removed, broadcasting: 1\nI0106 18:09:44.011304    2432 log.go:172] (0xc0008322c0) Go away received\nI0106 18:09:44.011530    2432 log.go:172] (0xc0008322c0) (0xc000778640) Stream removed, broadcasting: 1\nI0106 18:09:44.011559    2432 
log.go:172] (0xc0008322c0) (0xc0005bad20) Stream removed, broadcasting: 3\nI0106 18:09:44.011572    2432 log.go:172] (0xc0008322c0) (0xc0007786e0) Stream removed, broadcasting: 5\n"
Jan  6 18:09:44.016: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  6 18:09:44.016: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  6 18:09:44.016: INFO: Waiting for statefulset status.replicas updated to 0
Jan  6 18:09:44.020: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jan  6 18:09:54.030: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  6 18:09:54.030: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  6 18:09:54.030: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  6 18:09:54.041: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jan  6 18:09:54.042: INFO: ss-0  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:02 +0000 UTC  }]
Jan  6 18:09:54.042: INFO: ss-1  hunter-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:09:54.042: INFO: ss-2  hunter-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:09:54.042: INFO: 
Jan  6 18:09:54.042: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  6 18:09:55.219: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jan  6 18:09:55.219: INFO: ss-0  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:02 +0000 UTC  }]
Jan  6 18:09:55.219: INFO: ss-1  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:09:55.219: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:09:55.219: INFO: 
Jan  6 18:09:55.219: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  6 18:09:56.224: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jan  6 18:09:56.224: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:02 +0000 UTC  }]
Jan  6 18:09:56.224: INFO: ss-1  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:09:56.224: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:09:56.224: INFO: 
Jan  6 18:09:56.224: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  6 18:09:57.230: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jan  6 18:09:57.230: INFO: ss-1  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:09:57.230: INFO: ss-2  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:09:57.230: INFO: 
Jan  6 18:09:57.230: INFO: StatefulSet ss has not reached scale 0, at 2
Jan  6 18:09:58.235: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jan  6 18:09:58.235: INFO: ss-1  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:09:58.235: INFO: ss-2  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:09:58.235: INFO: 
Jan  6 18:09:58.235: INFO: StatefulSet ss has not reached scale 0, at 2
Jan  6 18:09:59.241: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jan  6 18:09:59.241: INFO: ss-1  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:09:59.241: INFO: ss-2  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:09:59.241: INFO: 
Jan  6 18:09:59.241: INFO: StatefulSet ss has not reached scale 0, at 2
Jan  6 18:10:00.246: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jan  6 18:10:00.246: INFO: ss-1  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:10:00.246: INFO: ss-2  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:10:00.246: INFO: 
Jan  6 18:10:00.246: INFO: StatefulSet ss has not reached scale 0, at 2
Jan  6 18:10:01.251: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jan  6 18:10:01.252: INFO: ss-1  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:10:01.252: INFO: ss-2  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:10:01.252: INFO: 
Jan  6 18:10:01.252: INFO: StatefulSet ss has not reached scale 0, at 2
Jan  6 18:10:02.257: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jan  6 18:10:02.257: INFO: ss-1  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:10:02.257: INFO: ss-2  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:10:02.257: INFO: 
Jan  6 18:10:02.257: INFO: StatefulSet ss has not reached scale 0, at 2
Jan  6 18:10:03.262: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jan  6 18:10:03.262: INFO: ss-1  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:10:03.262: INFO: ss-2  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:09:22 +0000 UTC  }]
Jan  6 18:10:03.262: INFO: 
Jan  6 18:10:03.262: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace e2e-tests-statefulset-s24dh
Jan  6 18:10:04.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-s24dh ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 18:10:04.400: INFO: rc: 1
Jan  6 18:10:04.400: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-s24dh ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001960090 exit status 1   true [0xc000039350 0xc000039368 0xc000039380] [0xc000039350 0xc000039368 0xc000039380] [0xc000039360 0xc000039378] [0x935700 0x935700] 0xc0020d4060 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan  6 18:10:14.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-s24dh ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 18:10:14.480: INFO: rc: 1
Jan  6 18:10:14.480: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-s24dh ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0019601b0 exit status 1   true [0xc000039388 0xc0000393a0 0xc0000393b8] [0xc000039388 0xc0000393a0 0xc0000393b8] [0xc000039398 0xc0000393b0] [0x935700 0x935700] 0xc0020d4cc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jan  6 18:10:24 – 18:14:57: INFO: (28 further identical RunHostCmd retries, one every 10s, elided; each ran the same kubectl exec and failed with rc: 1, stderr: Error from server (NotFound): pods "ss-1" not found)
Jan  6 18:15:07.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-s24dh ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 18:15:07.096: INFO: rc: 1
Jan  6 18:15:07.096: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Jan  6 18:15:07.096: INFO: Scaling statefulset ss to 0
Jan  6 18:15:07.104: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  6 18:15:07.106: INFO: Deleting all statefulset in ns e2e-tests-statefulset-s24dh
Jan  6 18:15:07.108: INFO: Scaling statefulset ss to 0
Jan  6 18:15:07.116: INFO: Waiting for statefulset status.replicas updated to 0
Jan  6 18:15:07.119: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:15:07.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-s24dh" for this suite.
Jan  6 18:15:13.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:15:13.291: INFO: namespace: e2e-tests-statefulset-s24dh, resource: bindings, ignored listing per whitelist
Jan  6 18:15:13.304: INFO: namespace e2e-tests-statefulset-s24dh deletion completed in 6.171555776s

• [SLOW TEST:371.401 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
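The burst-scaling log above shows the framework's RunHostCmd helper re-running the same `kubectl exec` every 10s until the retry budget is spent. A minimal, self-contained sketch of that retry pattern (the `retry`, `cmd`, and `attempt` names here are illustrative, not framework API, and the stand-in command replaces the real `kubectl exec`):

```shell
# Generic retry loop in the style of the harness's 10s RunHostCmd retries.
retry() {
  local attempts=$1 delay=$2
  shift 2
  local i
  for i in $(seq 1 "$attempts"); do
    if "$@"; then
      return 0          # command succeeded; stop retrying
    fi
    sleep "$delay"      # the e2e framework waits 10s between attempts
  done
  return 1              # retry budget exhausted ("rc: 1" in the log above)
}

# Stand-in for `kubectl exec`: fails on the first two attempts, then succeeds.
attempt=0
cmd() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 3 ]
}

retry 5 0 cmd && echo "succeeded after $attempt attempts"
```

In the run above the target pod had already been deleted, so every attempt failed with NotFound and the helper eventually gave up and fell through to `Scaling statefulset ss to 0`.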
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:15:13.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  6 18:15:13.407: INFO: Waiting up to 5m0s for pod "pod-1ec522cd-504b-11eb-8655-0242ac110009" in namespace "e2e-tests-emptydir-8wqk7" to be "success or failure"
Jan  6 18:15:13.451: INFO: Pod "pod-1ec522cd-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 43.799242ms
Jan  6 18:15:15.454: INFO: Pod "pod-1ec522cd-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04766284s
Jan  6 18:15:17.459: INFO: Pod "pod-1ec522cd-504b-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.0520355s
Jan  6 18:15:19.463: INFO: Pod "pod-1ec522cd-504b-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056369504s
STEP: Saw pod success
Jan  6 18:15:19.463: INFO: Pod "pod-1ec522cd-504b-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:15:19.466: INFO: Trying to get logs from node hunter-worker2 pod pod-1ec522cd-504b-11eb-8655-0242ac110009 container test-container: 
STEP: delete the pod
Jan  6 18:15:19.501: INFO: Waiting for pod pod-1ec522cd-504b-11eb-8655-0242ac110009 to disappear
Jan  6 18:15:19.520: INFO: Pod pod-1ec522cd-504b-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:15:19.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8wqk7" for this suite.
Jan  6 18:15:25.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:15:25.668: INFO: namespace: e2e-tests-emptydir-8wqk7, resource: bindings, ignored listing per whitelist
Jan  6 18:15:25.670: INFO: namespace e2e-tests-emptydir-8wqk7 deletion completed in 6.146576491s

• [SLOW TEST:12.366 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
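The EmptyDir test above polls the pod until its phase reaches a terminal state ("success or failure", up to 5m0s). A self-contained sketch of that wait, using a stand-in phase source instead of the real Go client (`wait_success_or_failure`, `phase_cmd`, and the /tmp counter file are illustrative names, not framework API):

```shell
# Poll a phase-reporting command until it prints Succeeded or Failed.
wait_success_or_failure() {
  local tries=$1
  shift
  local i phase
  for i in $(seq 1 "$tries"); do
    phase=$("$@")
    case "$phase" in
      Succeeded|Failed)
        echo "$phase"   # terminal phase reached, as in "Saw pod success"
        return 0 ;;
    esac
    # the framework sleeps ~2s between polls; elided so the sketch runs fast
  done
  return 1              # timed out while still Pending/Running
}

# Stand-in phase source: Pending twice, then Succeeded (a shortened mirror of
# the log's Pending -> Pending -> Running -> Succeeded progression).
phase_cmd() {
  c=$(cat /tmp/e2e_phase_count 2>/dev/null || echo 0)
  c=$((c + 1))
  echo "$c" > /tmp/e2e_phase_count
  if [ "$c" -lt 3 ]; then echo Pending; else echo Succeeded; fi
}

rm -f /tmp/e2e_phase_count
wait_success_or_failure 5 phase_cmd   # prints: Succeeded
```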
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:15:25.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:16:25.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-wlk2l" for this suite.
Jan  6 18:16:47.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:16:47.923: INFO: namespace: e2e-tests-container-probe-wlk2l, resource: bindings, ignored listing per whitelist
Jan  6 18:16:47.941: INFO: namespace e2e-tests-container-probe-wlk2l deletion completed in 22.108339435s

• [SLOW TEST:82.270 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:16:47.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  6 18:16:52.624: INFO: Successfully updated pod "labelsupdate57328291-504b-11eb-8655-0242ac110009"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:16:54.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q8t8m" for this suite.
Jan  6 18:17:16.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:17:16.753: INFO: namespace: e2e-tests-projected-q8t8m, resource: bindings, ignored listing per whitelist
Jan  6 18:17:16.817: INFO: namespace e2e-tests-projected-q8t8m deletion completed in 22.160042032s

• [SLOW TEST:28.876 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:17:16.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  6 18:17:16.897: INFO: Waiting up to 5m0s for pod "pod-686130d8-504b-11eb-8655-0242ac110009" in namespace "e2e-tests-emptydir-tzdnf" to be "success or failure"
Jan  6 18:17:16.956: INFO: Pod "pod-686130d8-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 59.663692ms
Jan  6 18:17:18.961: INFO: Pod "pod-686130d8-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063986984s
Jan  6 18:17:20.965: INFO: Pod "pod-686130d8-504b-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068225857s
STEP: Saw pod success
Jan  6 18:17:20.965: INFO: Pod "pod-686130d8-504b-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:17:20.968: INFO: Trying to get logs from node hunter-worker2 pod pod-686130d8-504b-11eb-8655-0242ac110009 container test-container: 
STEP: delete the pod
Jan  6 18:17:21.046: INFO: Waiting for pod pod-686130d8-504b-11eb-8655-0242ac110009 to disappear
Jan  6 18:17:21.100: INFO: Pod pod-686130d8-504b-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:17:21.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tzdnf" for this suite.
Jan  6 18:17:27.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:17:27.282: INFO: namespace: e2e-tests-emptydir-tzdnf, resource: bindings, ignored listing per whitelist
Jan  6 18:17:27.282: INFO: namespace e2e-tests-emptydir-tzdnf deletion completed in 6.178464337s

• [SLOW TEST:10.465 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:17:27.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  6 18:17:31.964: INFO: Successfully updated pod "annotationupdate6ea5e94e-504b-11eb-8655-0242ac110009"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:17:34.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qlwpg" for this suite.
Jan  6 18:17:56.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:17:56.118: INFO: namespace: e2e-tests-projected-qlwpg, resource: bindings, ignored listing per whitelist
Jan  6 18:17:56.127: INFO: namespace e2e-tests-projected-qlwpg deletion completed in 22.109096776s

• [SLOW TEST:28.844 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:17:56.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-7fd1c12c-504b-11eb-8655-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  6 18:17:56.238: INFO: Waiting up to 5m0s for pod "pod-secrets-7fd41219-504b-11eb-8655-0242ac110009" in namespace "e2e-tests-secrets-j6x8v" to be "success or failure"
Jan  6 18:17:56.268: INFO: Pod "pod-secrets-7fd41219-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 29.363413ms
Jan  6 18:17:58.430: INFO: Pod "pod-secrets-7fd41219-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191773146s
Jan  6 18:18:00.759: INFO: Pod "pod-secrets-7fd41219-504b-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.52088746s
Jan  6 18:18:02.764: INFO: Pod "pod-secrets-7fd41219-504b-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.525680624s
STEP: Saw pod success
Jan  6 18:18:02.764: INFO: Pod "pod-secrets-7fd41219-504b-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:18:02.766: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-7fd41219-504b-11eb-8655-0242ac110009 container secret-env-test: 
STEP: delete the pod
Jan  6 18:18:02.892: INFO: Waiting for pod pod-secrets-7fd41219-504b-11eb-8655-0242ac110009 to disappear
Jan  6 18:18:02.913: INFO: Pod pod-secrets-7fd41219-504b-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:18:02.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-j6x8v" for this suite.
Jan  6 18:18:08.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:18:09.009: INFO: namespace: e2e-tests-secrets-j6x8v, resource: bindings, ignored listing per whitelist
Jan  6 18:18:09.059: INFO: namespace e2e-tests-secrets-j6x8v deletion completed in 6.144138171s

• [SLOW TEST:12.932 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:18:09.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  6 18:18:09.170: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87896f53-504b-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-dwzwn" to be "success or failure"
Jan  6 18:18:09.177: INFO: Pod "downwardapi-volume-87896f53-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.600347ms
Jan  6 18:18:11.454: INFO: Pod "downwardapi-volume-87896f53-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28379909s
Jan  6 18:18:13.616: INFO: Pod "downwardapi-volume-87896f53-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445426317s
Jan  6 18:18:15.620: INFO: Pod "downwardapi-volume-87896f53-504b-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.449134311s
STEP: Saw pod success
Jan  6 18:18:15.620: INFO: Pod "downwardapi-volume-87896f53-504b-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:18:15.622: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-87896f53-504b-11eb-8655-0242ac110009 container client-container: 
STEP: delete the pod
Jan  6 18:18:15.713: INFO: Waiting for pod downwardapi-volume-87896f53-504b-11eb-8655-0242ac110009 to disappear
Jan  6 18:18:15.716: INFO: Pod downwardapi-volume-87896f53-504b-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:18:15.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dwzwn" for this suite.
Jan  6 18:18:21.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:18:21.809: INFO: namespace: e2e-tests-projected-dwzwn, resource: bindings, ignored listing per whitelist
Jan  6 18:18:21.851: INFO: namespace e2e-tests-projected-dwzwn deletion completed in 6.129715171s

• [SLOW TEST:12.791 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:18:21.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  6 18:18:21.927: INFO: Waiting up to 5m0s for pod "pod-8f235e6d-504b-11eb-8655-0242ac110009" in namespace "e2e-tests-emptydir-k9dg7" to be "success or failure"
Jan  6 18:18:21.975: INFO: Pod "pod-8f235e6d-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 47.645465ms
Jan  6 18:18:23.978: INFO: Pod "pod-8f235e6d-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050819766s
Jan  6 18:18:27.036: INFO: Pod "pod-8f235e6d-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 5.109153099s
Jan  6 18:18:29.040: INFO: Pod "pod-8f235e6d-504b-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 7.112648037s
Jan  6 18:18:31.044: INFO: Pod "pod-8f235e6d-504b-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.117227145s
STEP: Saw pod success
Jan  6 18:18:31.044: INFO: Pod "pod-8f235e6d-504b-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:18:31.047: INFO: Trying to get logs from node hunter-worker pod pod-8f235e6d-504b-11eb-8655-0242ac110009 container test-container: 
STEP: delete the pod
Jan  6 18:18:31.083: INFO: Waiting for pod pod-8f235e6d-504b-11eb-8655-0242ac110009 to disappear
Jan  6 18:18:31.099: INFO: Pod pod-8f235e6d-504b-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:18:31.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k9dg7" for this suite.
Jan  6 18:18:37.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:18:37.139: INFO: namespace: e2e-tests-emptydir-k9dg7, resource: bindings, ignored listing per whitelist
Jan  6 18:18:37.223: INFO: namespace e2e-tests-emptydir-k9dg7 deletion completed in 6.119068806s

• [SLOW TEST:15.372 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:18:37.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  6 18:18:37.403: INFO: Waiting up to 5m0s for pod "downwardapi-volume-985d983a-504b-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-srs78" to be "success or failure"
Jan  6 18:18:37.411: INFO: Pod "downwardapi-volume-985d983a-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.999952ms
Jan  6 18:18:39.461: INFO: Pod "downwardapi-volume-985d983a-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057092504s
Jan  6 18:18:41.464: INFO: Pod "downwardapi-volume-985d983a-504b-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.06072031s
Jan  6 18:18:43.469: INFO: Pod "downwardapi-volume-985d983a-504b-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065301095s
STEP: Saw pod success
Jan  6 18:18:43.469: INFO: Pod "downwardapi-volume-985d983a-504b-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:18:43.472: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-985d983a-504b-11eb-8655-0242ac110009 container client-container: 
STEP: delete the pod
Jan  6 18:18:43.575: INFO: Waiting for pod downwardapi-volume-985d983a-504b-11eb-8655-0242ac110009 to disappear
Jan  6 18:18:43.578: INFO: Pod downwardapi-volume-985d983a-504b-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:18:43.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-srs78" for this suite.
Jan  6 18:18:49.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:18:49.617: INFO: namespace: e2e-tests-projected-srs78, resource: bindings, ignored listing per whitelist
Jan  6 18:18:49.678: INFO: namespace e2e-tests-projected-srs78 deletion completed in 6.097563448s

• [SLOW TEST:12.455 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:18:49.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  6 18:18:49.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan  6 18:18:49.863: INFO: stderr: ""
Jan  6 18:18:49.863: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-12-13T01:19:52Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan  6 18:18:49.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qvsjg'
Jan  6 18:18:52.444: INFO: stderr: ""
Jan  6 18:18:52.444: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan  6 18:18:52.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qvsjg'
Jan  6 18:18:52.700: INFO: stderr: ""
Jan  6 18:18:52.700: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  6 18:18:53.704: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 18:18:53.705: INFO: Found 0 / 1
Jan  6 18:18:54.705: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 18:18:54.705: INFO: Found 0 / 1
Jan  6 18:18:55.705: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 18:18:55.705: INFO: Found 0 / 1
Jan  6 18:18:56.706: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 18:18:56.706: INFO: Found 1 / 1
Jan  6 18:18:56.706: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  6 18:18:56.709: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 18:18:56.709: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  6 18:18:56.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-7nb4s --namespace=e2e-tests-kubectl-qvsjg'
Jan  6 18:18:56.831: INFO: stderr: ""
Jan  6 18:18:56.831: INFO: stdout: "Name:               redis-master-7nb4s\nNamespace:          e2e-tests-kubectl-qvsjg\nPriority:           0\nPriorityClassName:  \nNode:               hunter-worker/172.18.0.4\nStart Time:         Wed, 06 Jan 2021 18:18:52 +0000\nLabels:             app=redis\n                    role=master\nAnnotations:        \nStatus:             Running\nIP:                 10.244.1.92\nControlled By:      ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://b89dbaef5daed103d87916f6f79b4aaed2b03d029bc7bba4080d0dad13aff0a9\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 06 Jan 2021 18:18:55 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rskld (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-rskld:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-rskld\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                    Message\n  ----    ------     ----  ----                    -------\n  Normal  Scheduled  4s    default-scheduler       Successfully assigned e2e-tests-kubectl-qvsjg/redis-master-7nb4s to hunter-worker\n  Normal  Pulled     3s    kubelet, hunter-worker  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    1s    kubelet, hunter-worker  Created container\n  Normal  Started    1s    kubelet, hunter-worker  Started container\n"
Jan  6 18:18:56.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-qvsjg'
Jan  6 18:18:56.966: INFO: stderr: ""
Jan  6 18:18:56.966: INFO: stdout: "Name:         redis-master\nNamespace:    e2e-tests-kubectl-qvsjg\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: redis-master-7nb4s\n"
Jan  6 18:18:56.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-qvsjg'
Jan  6 18:18:57.080: INFO: stderr: ""
Jan  6 18:18:57.080: INFO: stdout: "Name:              redis-master\nNamespace:         e2e-tests-kubectl-qvsjg\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.111.12.35\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.1.92:6379\nSession Affinity:  None\nEvents:            \n"
Jan  6 18:18:57.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane'
Jan  6 18:18:57.214: INFO: stderr: ""
Jan  6 18:18:57.214: INFO: stdout: "Name:               hunter-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/hostname=hunter-control-plane\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 23 Sep 2020 08:23:59 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 06 Jan 2021 18:18:51 +0000   Wed, 23 Sep 2020 08:23:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 06 Jan 2021 18:18:51 +0000   Wed, 23 Sep 2020 08:23:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 06 Jan 2021 18:18:51 +0000   Wed, 23 Sep 2020 08:23:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 06 Jan 2021 18:18:51 +0000   Wed, 23 Sep 2020 08:25:09 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.2\n  Hostname:    hunter-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759868Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759868Ki\n pods:               110\nSystem Info:\n Machine ID:                 6614791733384c4d8bae24c8b66b3c48\n System UUID:                9c1f06d4-1710-4ae6-92c6-19051881852f\n Boot ID:                    b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version:             4.15.0-118-generic\n OS Image:                   Ubuntu Groovy Gorilla (development branch)\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.4.0\n Kubelet Version:            v1.13.12\n Kube-Proxy Version:         v1.13.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (6 in total)\n  Namespace                  Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                            ------------  ----------  ---------------  -------------  ---\n  kube-system                etcd-hunter-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         105d\n  kube-system                kindnet-4ntk6                                   100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      105d\n  kube-system                kube-apiserver-hunter-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         105d\n  kube-system                kube-controller-manager-hunter-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         105d\n  kube-system                kube-proxy-hwckq                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         105d\n  kube-system                kube-scheduler-hunter-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         105d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests   Limits\n  --------           --------   ------\n  cpu                650m (4%)  100m (0%)\n  memory             50Mi (0%)  50Mi (0%)\n  ephemeral-storage  0 (0%)     0 (0%)\nEvents:              \n"
Jan  6 18:18:57.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-qvsjg'
Jan  6 18:18:57.354: INFO: stderr: ""
Jan  6 18:18:57.354: INFO: stdout: "Name:         e2e-tests-kubectl-qvsjg\nLabels:       e2e-framework=kubectl\n              e2e-run=c73021fa-5042-11eb-8655-0242ac110009\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:18:57.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qvsjg" for this suite.
Jan  6 18:19:19.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:19:19.566: INFO: namespace: e2e-tests-kubectl-qvsjg, resource: bindings, ignored listing per whitelist
Jan  6 18:19:19.583: INFO: namespace e2e-tests-kubectl-qvsjg deletion completed in 22.225296193s

• [SLOW TEST:29.905 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:19:19.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan  6 18:19:21.530: INFO: Waiting up to 5m0s for pod "client-containers-b2495cdb-504b-11eb-8655-0242ac110009" in namespace "e2e-tests-containers-4244z" to be "success or failure"
Jan  6 18:19:21.576: INFO: Pod "client-containers-b2495cdb-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 45.503307ms
Jan  6 18:19:23.579: INFO: Pod "client-containers-b2495cdb-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049023995s
Jan  6 18:19:25.583: INFO: Pod "client-containers-b2495cdb-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052697766s
Jan  6 18:19:27.587: INFO: Pod "client-containers-b2495cdb-504b-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057047937s
Jan  6 18:19:29.594: INFO: Pod "client-containers-b2495cdb-504b-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 8.064070015s
Jan  6 18:19:31.597: INFO: Pod "client-containers-b2495cdb-504b-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067052529s
STEP: Saw pod success
Jan  6 18:19:31.597: INFO: Pod "client-containers-b2495cdb-504b-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:19:31.599: INFO: Trying to get logs from node hunter-worker2 pod client-containers-b2495cdb-504b-11eb-8655-0242ac110009 container test-container: 
STEP: delete the pod
Jan  6 18:19:31.623: INFO: Waiting for pod client-containers-b2495cdb-504b-11eb-8655-0242ac110009 to disappear
Jan  6 18:19:31.628: INFO: Pod client-containers-b2495cdb-504b-11eb-8655-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:19:31.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-4244z" for this suite.
Jan  6 18:19:37.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:19:37.718: INFO: namespace: e2e-tests-containers-4244z, resource: bindings, ignored listing per whitelist
Jan  6 18:19:37.763: INFO: namespace e2e-tests-containers-4244z deletion completed in 6.131976313s

• [SLOW TEST:18.179 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:19:37.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-sfjrt
Jan  6 18:19:44.018: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-sfjrt
STEP: checking the pod's current state and verifying that restartCount is present
Jan  6 18:19:44.020: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:23:44.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-sfjrt" for this suite.
Jan  6 18:23:50.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:23:50.980: INFO: namespace: e2e-tests-container-probe-sfjrt, resource: bindings, ignored listing per whitelist
Jan  6 18:23:51.060: INFO: namespace e2e-tests-container-probe-sfjrt deletion completed in 6.113637972s

• [SLOW TEST:253.297 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:23:51.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  6 18:23:59.237: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 18:23:59.247: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 18:24:01.247: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 18:24:01.252: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 18:24:03.247: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 18:24:03.251: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 18:24:05.247: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 18:24:05.252: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 18:24:07.247: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 18:24:07.251: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 18:24:09.247: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 18:24:09.251: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 18:24:11.247: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 18:24:11.256: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 18:24:13.247: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 18:24:13.252: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 18:24:15.247: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 18:24:15.252: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 18:24:17.247: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 18:24:17.255: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 18:24:19.247: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 18:24:19.251: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 18:24:21.247: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 18:24:21.252: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 18:24:23.247: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 18:24:23.251: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 18:24:25.247: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 18:24:25.252: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:24:25.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-z8dfx" for this suite.
Jan  6 18:24:47.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:24:47.344: INFO: namespace: e2e-tests-container-lifecycle-hook-z8dfx, resource: bindings, ignored listing per whitelist
Jan  6 18:24:47.411: INFO: namespace e2e-tests-container-lifecycle-hook-z8dfx deletion completed in 22.154926661s

• [SLOW TEST:56.351 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:24:47.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  6 18:24:47.532: INFO: Waiting up to 5m0s for pod "pod-74f9ddac-504c-11eb-8655-0242ac110009" in namespace "e2e-tests-emptydir-ltvgf" to be "success or failure"
Jan  6 18:24:47.538: INFO: Pod "pod-74f9ddac-504c-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007286ms
Jan  6 18:24:49.611: INFO: Pod "pod-74f9ddac-504c-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079056237s
Jan  6 18:24:51.615: INFO: Pod "pod-74f9ddac-504c-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082964272s
STEP: Saw pod success
Jan  6 18:24:51.615: INFO: Pod "pod-74f9ddac-504c-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:24:51.618: INFO: Trying to get logs from node hunter-worker2 pod pod-74f9ddac-504c-11eb-8655-0242ac110009 container test-container: 
STEP: delete the pod
Jan  6 18:24:51.642: INFO: Waiting for pod pod-74f9ddac-504c-11eb-8655-0242ac110009 to disappear
Jan  6 18:24:51.689: INFO: Pod pod-74f9ddac-504c-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:24:51.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ltvgf" for this suite.
Jan  6 18:24:57.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:24:57.739: INFO: namespace: e2e-tests-emptydir-ltvgf, resource: bindings, ignored listing per whitelist
Jan  6 18:24:57.799: INFO: namespace e2e-tests-emptydir-ltvgf deletion completed in 6.105821009s

• [SLOW TEST:10.388 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:24:57.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  6 18:24:57.901: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b27a663-504c-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-p9m58" to be "success or failure"
Jan  6 18:24:57.905: INFO: Pod "downwardapi-volume-7b27a663-504c-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.412548ms
Jan  6 18:24:59.910: INFO: Pod "downwardapi-volume-7b27a663-504c-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008790585s
Jan  6 18:25:01.914: INFO: Pod "downwardapi-volume-7b27a663-504c-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013028608s
STEP: Saw pod success
Jan  6 18:25:01.914: INFO: Pod "downwardapi-volume-7b27a663-504c-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:25:01.918: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7b27a663-504c-11eb-8655-0242ac110009 container client-container: 
STEP: delete the pod
Jan  6 18:25:01.976: INFO: Waiting for pod downwardapi-volume-7b27a663-504c-11eb-8655-0242ac110009 to disappear
Jan  6 18:25:02.001: INFO: Pod downwardapi-volume-7b27a663-504c-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:25:02.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p9m58" for this suite.
Jan  6 18:25:08.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:25:08.029: INFO: namespace: e2e-tests-projected-p9m58, resource: bindings, ignored listing per whitelist
Jan  6 18:25:08.111: INFO: namespace e2e-tests-projected-p9m58 deletion completed in 6.105931469s

• [SLOW TEST:10.312 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:25:08.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-814e2e39-504c-11eb-8655-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  6 18:25:08.231: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-81505209-504c-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-sj95r" to be "success or failure"
Jan  6 18:25:08.246: INFO: Pod "pod-projected-secrets-81505209-504c-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 14.830607ms
Jan  6 18:25:10.264: INFO: Pod "pod-projected-secrets-81505209-504c-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033313124s
Jan  6 18:25:12.269: INFO: Pod "pod-projected-secrets-81505209-504c-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037657249s
STEP: Saw pod success
Jan  6 18:25:12.269: INFO: Pod "pod-projected-secrets-81505209-504c-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:25:12.272: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-81505209-504c-11eb-8655-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Jan  6 18:25:12.308: INFO: Waiting for pod pod-projected-secrets-81505209-504c-11eb-8655-0242ac110009 to disappear
Jan  6 18:25:12.315: INFO: Pod pod-projected-secrets-81505209-504c-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:25:12.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sj95r" for this suite.
Jan  6 18:25:18.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:25:18.360: INFO: namespace: e2e-tests-projected-sj95r, resource: bindings, ignored listing per whitelist
Jan  6 18:25:18.435: INFO: namespace e2e-tests-projected-sj95r deletion completed in 6.116167281s

• [SLOW TEST:10.324 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:25:18.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  6 18:25:23.112: INFO: Successfully updated pod "annotationupdate87784492-504c-11eb-8655-0242ac110009"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:25:27.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-plx69" for this suite.
Jan  6 18:25:49.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:25:49.311: INFO: namespace: e2e-tests-downward-api-plx69, resource: bindings, ignored listing per whitelist
Jan  6 18:25:49.313: INFO: namespace e2e-tests-downward-api-plx69 deletion completed in 22.127744584s

• [SLOW TEST:30.878 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:25:49.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-r7cjg
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  6 18:25:49.386: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  6 18:26:13.542: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.49:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-r7cjg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 18:26:13.542: INFO: >>> kubeConfig: /root/.kube/config
I0106 18:26:13.572019       6 log.go:172] (0xc001a44160) (0xc001ff4280) Create stream
I0106 18:26:13.572047       6 log.go:172] (0xc001a44160) (0xc001ff4280) Stream added, broadcasting: 1
I0106 18:26:13.574285       6 log.go:172] (0xc001a44160) Reply frame received for 1
I0106 18:26:13.574308       6 log.go:172] (0xc001a44160) (0xc001ff4320) Create stream
I0106 18:26:13.574318       6 log.go:172] (0xc001a44160) (0xc001ff4320) Stream added, broadcasting: 3
I0106 18:26:13.575113       6 log.go:172] (0xc001a44160) Reply frame received for 3
I0106 18:26:13.575144       6 log.go:172] (0xc001a44160) (0xc0025f0140) Create stream
I0106 18:26:13.575164       6 log.go:172] (0xc001a44160) (0xc0025f0140) Stream added, broadcasting: 5
I0106 18:26:13.576010       6 log.go:172] (0xc001a44160) Reply frame received for 5
I0106 18:26:13.646638       6 log.go:172] (0xc001a44160) Data frame received for 3
I0106 18:26:13.646667       6 log.go:172] (0xc001ff4320) (3) Data frame handling
I0106 18:26:13.646684       6 log.go:172] (0xc001ff4320) (3) Data frame sent
I0106 18:26:13.646693       6 log.go:172] (0xc001a44160) Data frame received for 3
I0106 18:26:13.646701       6 log.go:172] (0xc001ff4320) (3) Data frame handling
I0106 18:26:13.646881       6 log.go:172] (0xc001a44160) Data frame received for 5
I0106 18:26:13.646905       6 log.go:172] (0xc0025f0140) (5) Data frame handling
I0106 18:26:13.648244       6 log.go:172] (0xc001a44160) Data frame received for 1
I0106 18:26:13.648268       6 log.go:172] (0xc001ff4280) (1) Data frame handling
I0106 18:26:13.648281       6 log.go:172] (0xc001ff4280) (1) Data frame sent
I0106 18:26:13.648300       6 log.go:172] (0xc001a44160) (0xc001ff4280) Stream removed, broadcasting: 1
I0106 18:26:13.648322       6 log.go:172] (0xc001a44160) Go away received
I0106 18:26:13.648459       6 log.go:172] (0xc001a44160) (0xc001ff4280) Stream removed, broadcasting: 1
I0106 18:26:13.648485       6 log.go:172] (0xc001a44160) (0xc001ff4320) Stream removed, broadcasting: 3
I0106 18:26:13.648505       6 log.go:172] (0xc001a44160) (0xc0025f0140) Stream removed, broadcasting: 5
Jan  6 18:26:13.648: INFO: Found all expected endpoints: [netserver-0]
Jan  6 18:26:13.651: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.96:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-r7cjg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 18:26:13.651: INFO: >>> kubeConfig: /root/.kube/config
I0106 18:26:13.678399       6 log.go:172] (0xc001a44630) (0xc001ff4640) Create stream
I0106 18:26:13.678429       6 log.go:172] (0xc001a44630) (0xc001ff4640) Stream added, broadcasting: 1
I0106 18:26:13.680471       6 log.go:172] (0xc001a44630) Reply frame received for 1
I0106 18:26:13.680531       6 log.go:172] (0xc001a44630) (0xc0025f01e0) Create stream
I0106 18:26:13.680554       6 log.go:172] (0xc001a44630) (0xc0025f01e0) Stream added, broadcasting: 3
I0106 18:26:13.681557       6 log.go:172] (0xc001a44630) Reply frame received for 3
I0106 18:26:13.681594       6 log.go:172] (0xc001a44630) (0xc0025f0280) Create stream
I0106 18:26:13.681606       6 log.go:172] (0xc001a44630) (0xc0025f0280) Stream added, broadcasting: 5
I0106 18:26:13.682448       6 log.go:172] (0xc001a44630) Reply frame received for 5
I0106 18:26:13.758729       6 log.go:172] (0xc001a44630) Data frame received for 3
I0106 18:26:13.758799       6 log.go:172] (0xc0025f01e0) (3) Data frame handling
I0106 18:26:13.758829       6 log.go:172] (0xc0025f01e0) (3) Data frame sent
I0106 18:26:13.758843       6 log.go:172] (0xc001a44630) Data frame received for 3
I0106 18:26:13.758879       6 log.go:172] (0xc001a44630) Data frame received for 5
I0106 18:26:13.758925       6 log.go:172] (0xc0025f0280) (5) Data frame handling
I0106 18:26:13.758954       6 log.go:172] (0xc0025f01e0) (3) Data frame handling
I0106 18:26:13.760461       6 log.go:172] (0xc001a44630) Data frame received for 1
I0106 18:26:13.760482       6 log.go:172] (0xc001ff4640) (1) Data frame handling
I0106 18:26:13.760505       6 log.go:172] (0xc001ff4640) (1) Data frame sent
I0106 18:26:13.760529       6 log.go:172] (0xc001a44630) (0xc001ff4640) Stream removed, broadcasting: 1
I0106 18:26:13.760548       6 log.go:172] (0xc001a44630) Go away received
I0106 18:26:13.760674       6 log.go:172] (0xc001a44630) (0xc001ff4640) Stream removed, broadcasting: 1
I0106 18:26:13.760693       6 log.go:172] (0xc001a44630) (0xc0025f01e0) Stream removed, broadcasting: 3
I0106 18:26:13.760705       6 log.go:172] (0xc001a44630) (0xc0025f0280) Stream removed, broadcasting: 5
Jan  6 18:26:13.760: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:26:13.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-r7cjg" for this suite.
Jan  6 18:26:37.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:26:37.795: INFO: namespace: e2e-tests-pod-network-test-r7cjg, resource: bindings, ignored listing per whitelist
Jan  6 18:26:37.868: INFO: namespace e2e-tests-pod-network-test-r7cjg deletion completed in 24.103828361s

• [SLOW TEST:48.555 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:26:37.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-k2ptn
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-k2ptn
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-k2ptn
Jan  6 18:26:38.026: INFO: Found 0 stateful pods, waiting for 1
Jan  6 18:26:48.030: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan  6 18:26:48.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k2ptn ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  6 18:26:48.359: INFO: stderr: "I0106 18:26:48.200540    3298 log.go:172] (0xc000138790) (0xc0006fa640) Create stream\nI0106 18:26:48.200612    3298 log.go:172] (0xc000138790) (0xc0006fa640) Stream added, broadcasting: 1\nI0106 18:26:48.203666    3298 log.go:172] (0xc000138790) Reply frame received for 1\nI0106 18:26:48.203716    3298 log.go:172] (0xc000138790) (0xc0005a6dc0) Create stream\nI0106 18:26:48.203728    3298 log.go:172] (0xc000138790) (0xc0005a6dc0) Stream added, broadcasting: 3\nI0106 18:26:48.204718    3298 log.go:172] (0xc000138790) Reply frame received for 3\nI0106 18:26:48.204757    3298 log.go:172] (0xc000138790) (0xc0006fa6e0) Create stream\nI0106 18:26:48.204770    3298 log.go:172] (0xc000138790) (0xc0006fa6e0) Stream added, broadcasting: 5\nI0106 18:26:48.205886    3298 log.go:172] (0xc000138790) Reply frame received for 5\nI0106 18:26:48.352989    3298 log.go:172] (0xc000138790) Data frame received for 5\nI0106 18:26:48.353014    3298 log.go:172] (0xc0006fa6e0) (5) Data frame handling\nI0106 18:26:48.353049    3298 log.go:172] (0xc000138790) Data frame received for 3\nI0106 18:26:48.353091    3298 log.go:172] (0xc0005a6dc0) (3) Data frame handling\nI0106 18:26:48.353119    3298 log.go:172] (0xc0005a6dc0) (3) Data frame sent\nI0106 18:26:48.353141    3298 log.go:172] (0xc000138790) Data frame received for 3\nI0106 18:26:48.353153    3298 log.go:172] (0xc0005a6dc0) (3) Data frame handling\nI0106 18:26:48.354880    3298 log.go:172] (0xc000138790) Data frame received for 1\nI0106 18:26:48.354913    3298 log.go:172] (0xc0006fa640) (1) Data frame handling\nI0106 18:26:48.354935    3298 log.go:172] (0xc0006fa640) (1) Data frame sent\nI0106 18:26:48.354954    3298 log.go:172] (0xc000138790) (0xc0006fa640) Stream removed, broadcasting: 1\nI0106 18:26:48.354976    3298 log.go:172] (0xc000138790) Go away received\nI0106 18:26:48.355139    3298 log.go:172] (0xc000138790) (0xc0006fa640) Stream removed, broadcasting: 1\nI0106 18:26:48.355158    3298 
log.go:172] (0xc000138790) (0xc0005a6dc0) Stream removed, broadcasting: 3\nI0106 18:26:48.355167    3298 log.go:172] (0xc000138790) (0xc0006fa6e0) Stream removed, broadcasting: 5\n"
Jan  6 18:26:48.359: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  6 18:26:48.359: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  6 18:26:48.364: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  6 18:26:58.386: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  6 18:26:58.386: INFO: Waiting for statefulset status.replicas updated to 0
Jan  6 18:26:58.405: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999496s
Jan  6 18:26:59.409: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990846426s
Jan  6 18:27:00.416: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.986448257s
Jan  6 18:27:01.420: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.979979496s
Jan  6 18:27:02.425: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.975817309s
Jan  6 18:27:03.429: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.970944803s
Jan  6 18:27:04.434: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.967063456s
Jan  6 18:27:05.437: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.962114755s
Jan  6 18:27:06.440: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.958680882s
Jan  6 18:27:07.444: INFO: Verifying statefulset ss doesn't scale past 1 for another 955.221426ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-k2ptn
Jan  6 18:27:08.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k2ptn ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 18:27:08.675: INFO: stderr: "I0106 18:27:08.579924    3321 log.go:172] (0xc00015c790) (0xc0005cd400) Create stream\nI0106 18:27:08.579979    3321 log.go:172] (0xc00015c790) (0xc0005cd400) Stream added, broadcasting: 1\nI0106 18:27:08.583050    3321 log.go:172] (0xc00015c790) Reply frame received for 1\nI0106 18:27:08.583097    3321 log.go:172] (0xc00015c790) (0xc0005cd4a0) Create stream\nI0106 18:27:08.583109    3321 log.go:172] (0xc00015c790) (0xc0005cd4a0) Stream added, broadcasting: 3\nI0106 18:27:08.584088    3321 log.go:172] (0xc00015c790) Reply frame received for 3\nI0106 18:27:08.584136    3321 log.go:172] (0xc00015c790) (0xc0000f0000) Create stream\nI0106 18:27:08.584149    3321 log.go:172] (0xc00015c790) (0xc0000f0000) Stream added, broadcasting: 5\nI0106 18:27:08.585078    3321 log.go:172] (0xc00015c790) Reply frame received for 5\nI0106 18:27:08.668825    3321 log.go:172] (0xc00015c790) Data frame received for 5\nI0106 18:27:08.668951    3321 log.go:172] (0xc0000f0000) (5) Data frame handling\nI0106 18:27:08.668980    3321 log.go:172] (0xc00015c790) Data frame received for 3\nI0106 18:27:08.668991    3321 log.go:172] (0xc0005cd4a0) (3) Data frame handling\nI0106 18:27:08.669009    3321 log.go:172] (0xc0005cd4a0) (3) Data frame sent\nI0106 18:27:08.669025    3321 log.go:172] (0xc00015c790) Data frame received for 3\nI0106 18:27:08.669031    3321 log.go:172] (0xc0005cd4a0) (3) Data frame handling\nI0106 18:27:08.670435    3321 log.go:172] (0xc00015c790) Data frame received for 1\nI0106 18:27:08.670449    3321 log.go:172] (0xc0005cd400) (1) Data frame handling\nI0106 18:27:08.670456    3321 log.go:172] (0xc0005cd400) (1) Data frame sent\nI0106 18:27:08.670465    3321 log.go:172] (0xc00015c790) (0xc0005cd400) Stream removed, broadcasting: 1\nI0106 18:27:08.670502    3321 log.go:172] (0xc00015c790) Go away received\nI0106 18:27:08.670592    3321 log.go:172] (0xc00015c790) (0xc0005cd400) Stream removed, broadcasting: 1\nI0106 18:27:08.670603    3321 
log.go:172] (0xc00015c790) (0xc0005cd4a0) Stream removed, broadcasting: 3\nI0106 18:27:08.670609    3321 log.go:172] (0xc00015c790) (0xc0000f0000) Stream removed, broadcasting: 5\n"
Jan  6 18:27:08.676: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  6 18:27:08.676: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  6 18:27:08.680: INFO: Found 1 stateful pods, waiting for 3
Jan  6 18:27:18.685: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 18:27:18.685: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 18:27:18.685: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan  6 18:27:18.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k2ptn ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  6 18:27:18.892: INFO: stderr: "I0106 18:27:18.800977    3343 log.go:172] (0xc00034e2c0) (0xc0007625a0) Create stream\nI0106 18:27:18.801039    3343 log.go:172] (0xc00034e2c0) (0xc0007625a0) Stream added, broadcasting: 1\nI0106 18:27:18.803398    3343 log.go:172] (0xc00034e2c0) Reply frame received for 1\nI0106 18:27:18.803438    3343 log.go:172] (0xc00034e2c0) (0xc0005eab40) Create stream\nI0106 18:27:18.803449    3343 log.go:172] (0xc00034e2c0) (0xc0005eab40) Stream added, broadcasting: 3\nI0106 18:27:18.804109    3343 log.go:172] (0xc00034e2c0) Reply frame received for 3\nI0106 18:27:18.804142    3343 log.go:172] (0xc00034e2c0) (0xc000762640) Create stream\nI0106 18:27:18.804155    3343 log.go:172] (0xc00034e2c0) (0xc000762640) Stream added, broadcasting: 5\nI0106 18:27:18.804798    3343 log.go:172] (0xc00034e2c0) Reply frame received for 5\nI0106 18:27:18.884437    3343 log.go:172] (0xc00034e2c0) Data frame received for 5\nI0106 18:27:18.884480    3343 log.go:172] (0xc000762640) (5) Data frame handling\nI0106 18:27:18.884511    3343 log.go:172] (0xc00034e2c0) Data frame received for 3\nI0106 18:27:18.884529    3343 log.go:172] (0xc0005eab40) (3) Data frame handling\nI0106 18:27:18.884542    3343 log.go:172] (0xc0005eab40) (3) Data frame sent\nI0106 18:27:18.884553    3343 log.go:172] (0xc00034e2c0) Data frame received for 3\nI0106 18:27:18.884562    3343 log.go:172] (0xc0005eab40) (3) Data frame handling\nI0106 18:27:18.886554    3343 log.go:172] (0xc00034e2c0) Data frame received for 1\nI0106 18:27:18.886580    3343 log.go:172] (0xc0007625a0) (1) Data frame handling\nI0106 18:27:18.886592    3343 log.go:172] (0xc0007625a0) (1) Data frame sent\nI0106 18:27:18.886608    3343 log.go:172] (0xc00034e2c0) (0xc0007625a0) Stream removed, broadcasting: 1\nI0106 18:27:18.886626    3343 log.go:172] (0xc00034e2c0) Go away received\nI0106 18:27:18.886894    3343 log.go:172] (0xc00034e2c0) (0xc0007625a0) Stream removed, broadcasting: 1\nI0106 18:27:18.886926    3343 
log.go:172] (0xc00034e2c0) (0xc0005eab40) Stream removed, broadcasting: 3\nI0106 18:27:18.886945    3343 log.go:172] (0xc00034e2c0) (0xc000762640) Stream removed, broadcasting: 5\n"
Jan  6 18:27:18.892: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  6 18:27:18.892: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  6 18:27:18.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k2ptn ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  6 18:27:19.126: INFO: stderr: "I0106 18:27:19.022783    3366 log.go:172] (0xc000138840) (0xc000697360) Create stream\nI0106 18:27:19.022855    3366 log.go:172] (0xc000138840) (0xc000697360) Stream added, broadcasting: 1\nI0106 18:27:19.025407    3366 log.go:172] (0xc000138840) Reply frame received for 1\nI0106 18:27:19.025446    3366 log.go:172] (0xc000138840) (0xc000697400) Create stream\nI0106 18:27:19.025457    3366 log.go:172] (0xc000138840) (0xc000697400) Stream added, broadcasting: 3\nI0106 18:27:19.026524    3366 log.go:172] (0xc000138840) Reply frame received for 3\nI0106 18:27:19.026566    3366 log.go:172] (0xc000138840) (0xc00077e000) Create stream\nI0106 18:27:19.026582    3366 log.go:172] (0xc000138840) (0xc00077e000) Stream added, broadcasting: 5\nI0106 18:27:19.027685    3366 log.go:172] (0xc000138840) Reply frame received for 5\nI0106 18:27:19.118383    3366 log.go:172] (0xc000138840) Data frame received for 3\nI0106 18:27:19.118427    3366 log.go:172] (0xc000697400) (3) Data frame handling\nI0106 18:27:19.118443    3366 log.go:172] (0xc000697400) (3) Data frame sent\nI0106 18:27:19.118692    3366 log.go:172] (0xc000138840) Data frame received for 5\nI0106 18:27:19.118762    3366 log.go:172] (0xc00077e000) (5) Data frame handling\nI0106 18:27:19.118806    3366 log.go:172] (0xc000138840) Data frame received for 3\nI0106 18:27:19.118832    3366 log.go:172] (0xc000697400) (3) Data frame handling\nI0106 18:27:19.121556    3366 log.go:172] (0xc000138840) Data frame received for 1\nI0106 18:27:19.121583    3366 log.go:172] (0xc000697360) (1) Data frame handling\nI0106 18:27:19.121604    3366 log.go:172] (0xc000697360) (1) Data frame sent\nI0106 18:27:19.121626    3366 log.go:172] (0xc000138840) (0xc000697360) Stream removed, broadcasting: 1\nI0106 18:27:19.121878    3366 log.go:172] (0xc000138840) (0xc000697360) Stream removed, broadcasting: 1\nI0106 18:27:19.121904    3366 log.go:172] (0xc000138840) (0xc000697400) Stream removed, broadcasting: 
3\nI0106 18:27:19.121994    3366 log.go:172] (0xc000138840) Go away received\nI0106 18:27:19.122109    3366 log.go:172] (0xc000138840) (0xc00077e000) Stream removed, broadcasting: 5\n"
Jan  6 18:27:19.127: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  6 18:27:19.127: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  6 18:27:19.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k2ptn ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  6 18:27:19.386: INFO: stderr: "I0106 18:27:19.255175    3389 log.go:172] (0xc000154840) (0xc000695220) Create stream\nI0106 18:27:19.255229    3389 log.go:172] (0xc000154840) (0xc000695220) Stream added, broadcasting: 1\nI0106 18:27:19.258237    3389 log.go:172] (0xc000154840) Reply frame received for 1\nI0106 18:27:19.258294    3389 log.go:172] (0xc000154840) (0xc00078c000) Create stream\nI0106 18:27:19.258318    3389 log.go:172] (0xc000154840) (0xc00078c000) Stream added, broadcasting: 3\nI0106 18:27:19.259513    3389 log.go:172] (0xc000154840) Reply frame received for 3\nI0106 18:27:19.259574    3389 log.go:172] (0xc000154840) (0xc000686000) Create stream\nI0106 18:27:19.259593    3389 log.go:172] (0xc000154840) (0xc000686000) Stream added, broadcasting: 5\nI0106 18:27:19.260933    3389 log.go:172] (0xc000154840) Reply frame received for 5\nI0106 18:27:19.379794    3389 log.go:172] (0xc000154840) Data frame received for 5\nI0106 18:27:19.379832    3389 log.go:172] (0xc000686000) (5) Data frame handling\nI0106 18:27:19.379872    3389 log.go:172] (0xc000154840) Data frame received for 3\nI0106 18:27:19.379886    3389 log.go:172] (0xc00078c000) (3) Data frame handling\nI0106 18:27:19.379903    3389 log.go:172] (0xc00078c000) (3) Data frame sent\nI0106 18:27:19.380139    3389 log.go:172] (0xc000154840) Data frame received for 3\nI0106 18:27:19.380152    3389 log.go:172] (0xc00078c000) (3) Data frame handling\nI0106 18:27:19.381477    3389 log.go:172] (0xc000154840) Data frame received for 1\nI0106 18:27:19.381497    3389 log.go:172] (0xc000695220) (1) Data frame handling\nI0106 18:27:19.381508    3389 log.go:172] (0xc000695220) (1) Data frame sent\nI0106 18:27:19.381530    3389 log.go:172] (0xc000154840) (0xc000695220) Stream removed, broadcasting: 1\nI0106 18:27:19.381557    3389 log.go:172] (0xc000154840) Go away received\nI0106 18:27:19.381756    3389 log.go:172] (0xc000154840) (0xc000695220) Stream removed, broadcasting: 1\nI0106 18:27:19.381776    3389 
log.go:172] (0xc000154840) (0xc00078c000) Stream removed, broadcasting: 3\nI0106 18:27:19.381785    3389 log.go:172] (0xc000154840) (0xc000686000) Stream removed, broadcasting: 5\n"
Jan  6 18:27:19.386: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  6 18:27:19.386: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  6 18:27:19.386: INFO: Waiting for statefulset status.replicas updated to 0
Jan  6 18:27:19.389: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan  6 18:27:29.397: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  6 18:27:29.397: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  6 18:27:29.397: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  6 18:27:29.412: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999526s
Jan  6 18:27:30.417: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992236263s
Jan  6 18:27:31.422: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986973454s
Jan  6 18:27:32.427: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981555108s
Jan  6 18:27:33.432: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976914119s
Jan  6 18:27:34.437: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971833891s
Jan  6 18:27:35.441: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.96721368s
Jan  6 18:27:36.445: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963067617s
Jan  6 18:27:37.450: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.958604068s
Jan  6 18:27:38.455: INFO: Verifying statefulset ss doesn't scale past 3 for another 953.663591ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-k2ptn
Jan  6 18:27:39.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k2ptn ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 18:27:39.702: INFO: stderr: "I0106 18:27:39.595106    3411 log.go:172] (0xc0008302c0) (0xc00074c640) Create stream\nI0106 18:27:39.595167    3411 log.go:172] (0xc0008302c0) (0xc00074c640) Stream added, broadcasting: 1\nI0106 18:27:39.597772    3411 log.go:172] (0xc0008302c0) Reply frame received for 1\nI0106 18:27:39.597824    3411 log.go:172] (0xc0008302c0) (0xc000696dc0) Create stream\nI0106 18:27:39.597838    3411 log.go:172] (0xc0008302c0) (0xc000696dc0) Stream added, broadcasting: 3\nI0106 18:27:39.599001    3411 log.go:172] (0xc0008302c0) Reply frame received for 3\nI0106 18:27:39.599048    3411 log.go:172] (0xc0008302c0) (0xc00054e000) Create stream\nI0106 18:27:39.599064    3411 log.go:172] (0xc0008302c0) (0xc00054e000) Stream added, broadcasting: 5\nI0106 18:27:39.600059    3411 log.go:172] (0xc0008302c0) Reply frame received for 5\nI0106 18:27:39.695871    3411 log.go:172] (0xc0008302c0) Data frame received for 3\nI0106 18:27:39.695903    3411 log.go:172] (0xc000696dc0) (3) Data frame handling\nI0106 18:27:39.695934    3411 log.go:172] (0xc000696dc0) (3) Data frame sent\nI0106 18:27:39.695951    3411 log.go:172] (0xc0008302c0) Data frame received for 3\nI0106 18:27:39.695961    3411 log.go:172] (0xc000696dc0) (3) Data frame handling\nI0106 18:27:39.696323    3411 log.go:172] (0xc0008302c0) Data frame received for 5\nI0106 18:27:39.696342    3411 log.go:172] (0xc00054e000) (5) Data frame handling\nI0106 18:27:39.697870    3411 log.go:172] (0xc0008302c0) Data frame received for 1\nI0106 18:27:39.697890    3411 log.go:172] (0xc00074c640) (1) Data frame handling\nI0106 18:27:39.697901    3411 log.go:172] (0xc00074c640) (1) Data frame sent\nI0106 18:27:39.697915    3411 log.go:172] (0xc0008302c0) (0xc00074c640) Stream removed, broadcasting: 1\nI0106 18:27:39.697960    3411 log.go:172] (0xc0008302c0) Go away received\nI0106 18:27:39.698100    3411 log.go:172] (0xc0008302c0) (0xc00074c640) Stream removed, broadcasting: 1\nI0106 18:27:39.698118    3411 
log.go:172] (0xc0008302c0) (0xc000696dc0) Stream removed, broadcasting: 3\nI0106 18:27:39.698129    3411 log.go:172] (0xc0008302c0) (0xc00054e000) Stream removed, broadcasting: 5\n"
Jan  6 18:27:39.702: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  6 18:27:39.702: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  6 18:27:39.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k2ptn ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 18:27:39.928: INFO: stderr: "I0106 18:27:39.825567    3433 log.go:172] (0xc000138580) (0xc000023540) Create stream\nI0106 18:27:39.825615    3433 log.go:172] (0xc000138580) (0xc000023540) Stream added, broadcasting: 1\nI0106 18:27:39.827175    3433 log.go:172] (0xc000138580) Reply frame received for 1\nI0106 18:27:39.827210    3433 log.go:172] (0xc000138580) (0xc0004d8500) Create stream\nI0106 18:27:39.827219    3433 log.go:172] (0xc000138580) (0xc0004d8500) Stream added, broadcasting: 3\nI0106 18:27:39.827792    3433 log.go:172] (0xc000138580) Reply frame received for 3\nI0106 18:27:39.827824    3433 log.go:172] (0xc000138580) (0xc0005fe000) Create stream\nI0106 18:27:39.827838    3433 log.go:172] (0xc000138580) (0xc0005fe000) Stream added, broadcasting: 5\nI0106 18:27:39.828444    3433 log.go:172] (0xc000138580) Reply frame received for 5\nI0106 18:27:39.922857    3433 log.go:172] (0xc000138580) Data frame received for 3\nI0106 18:27:39.922887    3433 log.go:172] (0xc0004d8500) (3) Data frame handling\nI0106 18:27:39.922902    3433 log.go:172] (0xc0004d8500) (3) Data frame sent\nI0106 18:27:39.922911    3433 log.go:172] (0xc000138580) Data frame received for 3\nI0106 18:27:39.922918    3433 log.go:172] (0xc0004d8500) (3) Data frame handling\nI0106 18:27:39.922944    3433 log.go:172] (0xc000138580) Data frame received for 5\nI0106 18:27:39.922949    3433 log.go:172] (0xc0005fe000) (5) Data frame handling\nI0106 18:27:39.923951    3433 log.go:172] (0xc000138580) Data frame received for 1\nI0106 18:27:39.923966    3433 log.go:172] (0xc000023540) (1) Data frame handling\nI0106 18:27:39.923975    3433 log.go:172] (0xc000023540) (1) Data frame sent\nI0106 18:27:39.923987    3433 log.go:172] (0xc000138580) (0xc000023540) Stream removed, broadcasting: 1\nI0106 18:27:39.924005    3433 log.go:172] (0xc000138580) Go away received\nI0106 18:27:39.924246    3433 log.go:172] (0xc000138580) (0xc000023540) Stream removed, broadcasting: 1\nI0106 18:27:39.924270    3433 
log.go:172] (0xc000138580) (0xc0004d8500) Stream removed, broadcasting: 3\nI0106 18:27:39.924280    3433 log.go:172] (0xc000138580) (0xc0005fe000) Stream removed, broadcasting: 5\n"
Jan  6 18:27:39.928: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  6 18:27:39.928: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  6 18:27:39.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k2ptn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 18:27:40.162: INFO: stderr: "I0106 18:27:40.092421    3454 log.go:172] (0xc000780160) (0xc000672280) Create stream\nI0106 18:27:40.092478    3454 log.go:172] (0xc000780160) (0xc000672280) Stream added, broadcasting: 1\nI0106 18:27:40.094378    3454 log.go:172] (0xc000780160) Reply frame received for 1\nI0106 18:27:40.094430    3454 log.go:172] (0xc000780160) (0xc000024b40) Create stream\nI0106 18:27:40.094445    3454 log.go:172] (0xc000780160) (0xc000024b40) Stream added, broadcasting: 3\nI0106 18:27:40.095187    3454 log.go:172] (0xc000780160) Reply frame received for 3\nI0106 18:27:40.095217    3454 log.go:172] (0xc000780160) (0xc0004da000) Create stream\nI0106 18:27:40.095225    3454 log.go:172] (0xc000780160) (0xc0004da000) Stream added, broadcasting: 5\nI0106 18:27:40.095921    3454 log.go:172] (0xc000780160) Reply frame received for 5\nI0106 18:27:40.156680    3454 log.go:172] (0xc000780160) Data frame received for 5\nI0106 18:27:40.156715    3454 log.go:172] (0xc0004da000) (5) Data frame handling\nI0106 18:27:40.156741    3454 log.go:172] (0xc000780160) Data frame received for 3\nI0106 18:27:40.156754    3454 log.go:172] (0xc000024b40) (3) Data frame handling\nI0106 18:27:40.156764    3454 log.go:172] (0xc000024b40) (3) Data frame sent\nI0106 18:27:40.156771    3454 log.go:172] (0xc000780160) Data frame received for 3\nI0106 18:27:40.156775    3454 log.go:172] (0xc000024b40) (3) Data frame handling\nI0106 18:27:40.158108    3454 log.go:172] (0xc000780160) Data frame received for 1\nI0106 18:27:40.158158    3454 log.go:172] (0xc000672280) (1) Data frame handling\nI0106 18:27:40.158171    3454 log.go:172] (0xc000672280) (1) Data frame sent\nI0106 18:27:40.158183    3454 log.go:172] (0xc000780160) (0xc000672280) Stream removed, broadcasting: 1\nI0106 18:27:40.158200    3454 log.go:172] (0xc000780160) Go away received\nI0106 18:27:40.158387    3454 log.go:172] (0xc000780160) (0xc000672280) Stream removed, broadcasting: 1\nI0106 18:27:40.158400    3454 
log.go:172] (0xc000780160) (0xc000024b40) Stream removed, broadcasting: 3\nI0106 18:27:40.158406    3454 log.go:172] (0xc000780160) (0xc0004da000) Stream removed, broadcasting: 5\n"
Jan  6 18:27:40.162: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  6 18:27:40.162: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  6 18:27:40.162: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  6 18:28:10.180: INFO: Deleting all statefulset in ns e2e-tests-statefulset-k2ptn
Jan  6 18:28:10.183: INFO: Scaling statefulset ss to 0
Jan  6 18:28:10.189: INFO: Waiting for statefulset status.replicas updated to 0
Jan  6 18:28:10.190: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:28:10.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-k2ptn" for this suite.
Jan  6 18:28:16.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:28:16.315: INFO: namespace: e2e-tests-statefulset-k2ptn, resource: bindings, ignored listing per whitelist
Jan  6 18:28:16.342: INFO: namespace e2e-tests-statefulset-k2ptn deletion completed in 6.098359048s

• [SLOW TEST:98.474 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
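The scaling behavior verified above hinges on the readiness probe: `mv`-ing /usr/share/nginx/html/index.html out of the web root makes the probe fail, the pod goes unready, and the controller halts ordered scale-up and scale-down until readiness returns. A manifest along these lines would reproduce it (a sketch: the `baz=blah,foo=bar` labels and service name come from the log, while the probe settings and image are assumptions):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                  # headless service created by the test
  podManagementPolicy: OrderedReady  # the default; required for ordered scaling
  replicas: 1
  selector:
    matchLabels:
      foo: bar
      baz: blah
  template:
    metadata:
      labels:
        foo: bar
        baz: blah
    spec:
      containers:
      - name: nginx
        image: nginx                 # assumed; index.html is its default page
        readinessProbe:
          httpGet:
            path: /index.html        # removing this file => probe fails => unready
            port: 80
          periodSeconds: 1
```

With OrderedReady, `kubectl scale statefulset ss --replicas=3` creates ss-1 only after ss-0 is ready, which is exactly the halt-while-unhealthy behavior the log verifies.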
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:28:16.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  6 18:28:16.509: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f186b1bb-504c-11eb-8655-0242ac110009" in namespace "e2e-tests-downward-api-hnkqm" to be "success or failure"
Jan  6 18:28:16.512: INFO: Pod "downwardapi-volume-f186b1bb-504c-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.31276ms
Jan  6 18:28:18.517: INFO: Pod "downwardapi-volume-f186b1bb-504c-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007709628s
Jan  6 18:28:20.521: INFO: Pod "downwardapi-volume-f186b1bb-504c-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011752192s
STEP: Saw pod success
Jan  6 18:28:20.521: INFO: Pod "downwardapi-volume-f186b1bb-504c-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:28:20.523: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f186b1bb-504c-11eb-8655-0242ac110009 container client-container: 
STEP: delete the pod
Jan  6 18:28:20.652: INFO: Waiting for pod downwardapi-volume-f186b1bb-504c-11eb-8655-0242ac110009 to disappear
Jan  6 18:28:20.730: INFO: Pod downwardapi-volume-f186b1bb-504c-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:28:20.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hnkqm" for this suite.
Jan  6 18:28:26.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:28:26.978: INFO: namespace: e2e-tests-downward-api-hnkqm, resource: bindings, ignored listing per whitelist
Jan  6 18:28:26.982: INFO: namespace e2e-tests-downward-api-hnkqm deletion completed in 6.167389486s

• [SLOW TEST:10.640 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
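The DefaultMode test above projects pod metadata into a volume and has the container stat the file permissions. A pod sketch exercising the same path (the container name `client-container` matches the log; the image, command, and mount path are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test     # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                  # assumed image
    # Print the projected file's mode; the test asserts it matches defaultMode.
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400             # applied to every projected file lacking its own mode
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```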
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:28:26.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-6xr4b
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-6xr4b
STEP: Deleting pre-stop pod
Jan  6 18:28:40.191: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:28:40.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-6xr4b" for this suite.
Jan  6 18:29:18.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:29:18.258: INFO: namespace: e2e-tests-prestop-6xr4b, resource: bindings, ignored listing per whitelist
Jan  6 18:29:18.354: INFO: namespace e2e-tests-prestop-6xr4b deletion completed in 38.133653381s

• [SLOW TEST:51.372 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:29:18.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  6 18:29:18.459: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:29:26.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-kr8r9" for this suite.
Jan  6 18:29:32.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:29:32.467: INFO: namespace: e2e-tests-init-container-kr8r9, resource: bindings, ignored listing per whitelist
Jan  6 18:29:32.530: INFO: namespace e2e-tests-init-container-kr8r9 deletion completed in 6.097165365s

• [SLOW TEST:14.175 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:29:32.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:29:39.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-87lmn" for this suite.
Jan  6 18:30:01.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:30:01.788: INFO: namespace: e2e-tests-replication-controller-87lmn, resource: bindings, ignored listing per whitelist
Jan  6 18:30:01.801: INFO: namespace e2e-tests-replication-controller-87lmn deletion completed in 22.098133006s

• [SLOW TEST:29.270 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:30:01.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  6 18:30:01.920: INFO: Waiting up to 5m0s for pod "downwardapi-volume-305d1bf4-504d-11eb-8655-0242ac110009" in namespace "e2e-tests-downward-api-ltr9r" to be "success or failure"
Jan  6 18:30:01.924: INFO: Pod "downwardapi-volume-305d1bf4-504d-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.374043ms
Jan  6 18:30:04.641: INFO: Pod "downwardapi-volume-305d1bf4-504d-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.72024787s
Jan  6 18:30:06.645: INFO: Pod "downwardapi-volume-305d1bf4-504d-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.724340086s
STEP: Saw pod success
Jan  6 18:30:06.645: INFO: Pod "downwardapi-volume-305d1bf4-504d-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:30:06.648: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-305d1bf4-504d-11eb-8655-0242ac110009 container client-container: 
STEP: delete the pod
Jan  6 18:30:06.702: INFO: Waiting for pod downwardapi-volume-305d1bf4-504d-11eb-8655-0242ac110009 to disappear
Jan  6 18:30:06.708: INFO: Pod downwardapi-volume-305d1bf4-504d-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:30:06.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ltr9r" for this suite.
Jan  6 18:30:12.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:30:12.733: INFO: namespace: e2e-tests-downward-api-ltr9r, resource: bindings, ignored listing per whitelist
Jan  6 18:30:12.823: INFO: namespace e2e-tests-downward-api-ltr9r deletion completed in 6.111509163s

• [SLOW TEST:11.022 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:30:12.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan  6 18:30:13.000: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  6 18:30:18.004: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:30:19.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-qt4rz" for this suite.
Jan  6 18:30:25.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:30:25.219: INFO: namespace: e2e-tests-replication-controller-qt4rz, resource: bindings, ignored listing per whitelist
Jan  6 18:30:25.234: INFO: namespace e2e-tests-replication-controller-qt4rz deletion completed in 6.207875843s

• [SLOW TEST:12.411 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:30:25.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan  6 18:30:25.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan  6 18:30:25.576: INFO: stderr: ""
Jan  6 18:30:25.576: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:30:25.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sxklx" for this suite.
Jan  6 18:30:31.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:30:31.673: INFO: namespace: e2e-tests-kubectl-sxklx, resource: bindings, ignored listing per whitelist
Jan  6 18:30:31.695: INFO: namespace e2e-tests-kubectl-sxklx deletion completed in 6.113950157s

• [SLOW TEST:6.461 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:30:31.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  6 18:30:31.800: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:30:38.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-4w4g4" for this suite.
Jan  6 18:30:44.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:30:44.331: INFO: namespace: e2e-tests-init-container-4w4g4, resource: bindings, ignored listing per whitelist
Jan  6 18:30:44.402: INFO: namespace e2e-tests-init-container-4w4g4 deletion completed in 6.131152008s

• [SLOW TEST:12.707 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:30:44.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-49c0fb61-504d-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  6 18:30:44.546: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-49c6b4ed-504d-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-njppt" to be "success or failure"
Jan  6 18:30:44.552: INFO: Pod "pod-projected-configmaps-49c6b4ed-504d-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.263977ms
Jan  6 18:30:46.556: INFO: Pod "pod-projected-configmaps-49c6b4ed-504d-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010445894s
Jan  6 18:30:48.561: INFO: Pod "pod-projected-configmaps-49c6b4ed-504d-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014722978s
STEP: Saw pod success
Jan  6 18:30:48.561: INFO: Pod "pod-projected-configmaps-49c6b4ed-504d-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:30:48.564: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-49c6b4ed-504d-11eb-8655-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  6 18:30:48.631: INFO: Waiting for pod pod-projected-configmaps-49c6b4ed-504d-11eb-8655-0242ac110009 to disappear
Jan  6 18:30:48.755: INFO: Pod pod-projected-configmaps-49c6b4ed-504d-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:30:48.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-njppt" for this suite.
Jan  6 18:30:54.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:30:54.822: INFO: namespace: e2e-tests-projected-njppt, resource: bindings, ignored listing per whitelist
Jan  6 18:30:54.866: INFO: namespace e2e-tests-projected-njppt deletion completed in 6.107548228s

• [SLOW TEST:10.463 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:30:54.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan  6 18:30:55.481: INFO: Waiting up to 5m0s for pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-qxstk" in namespace "e2e-tests-svcaccounts-94mdn" to be "success or failure"
Jan  6 18:30:55.483: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-qxstk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164314ms
Jan  6 18:30:57.488: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-qxstk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006691376s
Jan  6 18:30:59.509: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-qxstk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028125338s
Jan  6 18:31:01.513: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-qxstk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031773447s
Jan  6 18:31:03.517: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-qxstk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035646323s
STEP: Saw pod success
Jan  6 18:31:03.517: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-qxstk" satisfied condition "success or failure"
Jan  6 18:31:03.520: INFO: Trying to get logs from node hunter-worker pod pod-service-account-504b8573-504d-11eb-8655-0242ac110009-qxstk container token-test: 
STEP: delete the pod
Jan  6 18:31:03.553: INFO: Waiting for pod pod-service-account-504b8573-504d-11eb-8655-0242ac110009-qxstk to disappear
Jan  6 18:31:03.564: INFO: Pod pod-service-account-504b8573-504d-11eb-8655-0242ac110009-qxstk no longer exists
STEP: Creating a pod to test consume service account root CA
Jan  6 18:31:03.568: INFO: Waiting up to 5m0s for pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-s2vlc" in namespace "e2e-tests-svcaccounts-94mdn" to be "success or failure"
Jan  6 18:31:03.577: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-s2vlc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.963127ms
Jan  6 18:31:05.580: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-s2vlc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012499935s
Jan  6 18:31:07.593: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-s2vlc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025187224s
Jan  6 18:31:09.623: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-s2vlc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055409504s
Jan  6 18:31:11.641: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-s2vlc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073468821s
STEP: Saw pod success
Jan  6 18:31:11.641: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-s2vlc" satisfied condition "success or failure"
Jan  6 18:31:11.644: INFO: Trying to get logs from node hunter-worker pod pod-service-account-504b8573-504d-11eb-8655-0242ac110009-s2vlc container root-ca-test: 
STEP: delete the pod
Jan  6 18:31:11.679: INFO: Waiting for pod pod-service-account-504b8573-504d-11eb-8655-0242ac110009-s2vlc to disappear
Jan  6 18:31:11.691: INFO: Pod pod-service-account-504b8573-504d-11eb-8655-0242ac110009-s2vlc no longer exists
STEP: Creating a pod to test consume service account namespace
Jan  6 18:31:11.694: INFO: Waiting up to 5m0s for pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-xzms4" in namespace "e2e-tests-svcaccounts-94mdn" to be "success or failure"
Jan  6 18:31:11.708: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-xzms4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.143913ms
Jan  6 18:31:13.712: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-xzms4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017377476s
Jan  6 18:31:15.715: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-xzms4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020574706s
Jan  6 18:31:17.756: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-xzms4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060987926s
STEP: Saw pod success
Jan  6 18:31:17.756: INFO: Pod "pod-service-account-504b8573-504d-11eb-8655-0242ac110009-xzms4" satisfied condition "success or failure"
Jan  6 18:31:17.759: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-504b8573-504d-11eb-8655-0242ac110009-xzms4 container namespace-test: 
STEP: delete the pod
Jan  6 18:31:17.814: INFO: Waiting for pod pod-service-account-504b8573-504d-11eb-8655-0242ac110009-xzms4 to disappear
Jan  6 18:31:17.830: INFO: Pod pod-service-account-504b8573-504d-11eb-8655-0242ac110009-xzms4 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:31:17.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-94mdn" for this suite.
Jan  6 18:31:23.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:31:23.890: INFO: namespace: e2e-tests-svcaccounts-94mdn, resource: bindings, ignored listing per whitelist
Jan  6 18:31:23.965: INFO: namespace e2e-tests-svcaccounts-94mdn deletion completed in 6.131495629s

• [SLOW TEST:29.099 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:31:23.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  6 18:31:28.655: INFO: Successfully updated pod "labelsupdate615adbfb-504d-11eb-8655-0242ac110009"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:31:30.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vh88d" for this suite.
Jan  6 18:31:52.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:31:52.949: INFO: namespace: e2e-tests-downward-api-vh88d, resource: bindings, ignored listing per whitelist
Jan  6 18:31:52.965: INFO: namespace e2e-tests-downward-api-vh88d deletion completed in 22.106732452s

• [SLOW TEST:29.000 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:31:52.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0106 18:32:03.116527       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  6 18:32:03.116: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:32:03.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-d7wgz" for this suite.
Jan  6 18:32:09.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:32:09.200: INFO: namespace: e2e-tests-gc-d7wgz, resource: bindings, ignored listing per whitelist
Jan  6 18:32:09.234: INFO: namespace e2e-tests-gc-d7wgz deletion completed in 6.114280705s

• [SLOW TEST:16.268 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:32:09.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  6 18:32:09.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-k5chr'
Jan  6 18:32:11.958: INFO: stderr: ""
Jan  6 18:32:11.958: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  6 18:32:17.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-k5chr -o json'
Jan  6 18:32:17.109: INFO: stderr: ""
Jan  6 18:32:17.109: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2021-01-06T18:32:11Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-k5chr\",\n        \"resourceVersion\": \"18063947\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-k5chr/pods/e2e-test-nginx-pod\",\n        \"uid\": \"7ddf3d3f-504d-11eb-8302-0242ac120002\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-8wh9q\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n    
            \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-8wh9q\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-8wh9q\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-01-06T18:32:12Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-01-06T18:32:15Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-01-06T18:32:15Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-01-06T18:32:11Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://e513d67579fd3fd9ba0aa8695c34f8222f41b4b522555c8d1b8f030dbe5c34d5\",\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n              
          \"startedAt\": \"2021-01-06T18:32:15Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.3\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.59\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2021-01-06T18:32:12Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan  6 18:32:17.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-k5chr'
Jan  6 18:32:17.377: INFO: stderr: ""
Jan  6 18:32:17.377: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan  6 18:32:17.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-k5chr'
Jan  6 18:32:21.572: INFO: stderr: ""
Jan  6 18:32:21.572: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:32:21.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k5chr" for this suite.
Jan  6 18:32:27.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:32:27.628: INFO: namespace: e2e-tests-kubectl-k5chr, resource: bindings, ignored listing per whitelist
Jan  6 18:32:27.678: INFO: namespace e2e-tests-kubectl-k5chr deletion completed in 6.101991607s

• [SLOW TEST:18.444 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:32:27.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  6 18:32:27.887: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:27.891: INFO: Number of nodes with available pods: 0
Jan  6 18:32:27.891: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:28.913: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:28.916: INFO: Number of nodes with available pods: 0
Jan  6 18:32:28.916: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:29.995: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:30.131: INFO: Number of nodes with available pods: 0
Jan  6 18:32:30.131: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:30.895: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:30.898: INFO: Number of nodes with available pods: 0
Jan  6 18:32:30.898: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:31.973: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:31.978: INFO: Number of nodes with available pods: 0
Jan  6 18:32:31.978: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:32.895: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:32.899: INFO: Number of nodes with available pods: 1
Jan  6 18:32:32.899: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:33.896: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:33.899: INFO: Number of nodes with available pods: 2
Jan  6 18:32:33.899: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan  6 18:32:33.916: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:33.918: INFO: Number of nodes with available pods: 1
Jan  6 18:32:33.918: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:35.099: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:35.103: INFO: Number of nodes with available pods: 1
Jan  6 18:32:35.103: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:35.924: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:35.928: INFO: Number of nodes with available pods: 1
Jan  6 18:32:35.928: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:37.003: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:37.006: INFO: Number of nodes with available pods: 1
Jan  6 18:32:37.006: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:37.923: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:37.954: INFO: Number of nodes with available pods: 1
Jan  6 18:32:37.954: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:38.934: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:38.962: INFO: Number of nodes with available pods: 1
Jan  6 18:32:38.962: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:39.923: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:39.926: INFO: Number of nodes with available pods: 1
Jan  6 18:32:39.926: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:40.923: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:40.926: INFO: Number of nodes with available pods: 1
Jan  6 18:32:40.926: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:41.923: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:41.925: INFO: Number of nodes with available pods: 1
Jan  6 18:32:41.926: INFO: Node hunter-worker is running more than one daemon pod
Jan  6 18:32:42.931: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  6 18:32:42.934: INFO: Number of nodes with available pods: 2
Jan  6 18:32:42.934: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-rzchq, will wait for the garbage collector to delete the pods
Jan  6 18:32:42.994: INFO: Deleting DaemonSet.extensions daemon-set took: 5.695504ms
Jan  6 18:32:43.095: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.232228ms
Jan  6 18:32:46.897: INFO: Number of nodes with available pods: 0
Jan  6 18:32:46.897: INFO: Number of running nodes: 0, number of available pods: 0
Jan  6 18:32:46.899: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-rzchq/daemonsets","resourceVersion":"18064086"},"items":null}

Jan  6 18:32:46.901: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-rzchq/pods","resourceVersion":"18064086"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:32:46.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-rzchq" for this suite.
Jan  6 18:32:52.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:32:53.021: INFO: namespace: e2e-tests-daemonsets-rzchq, resource: bindings, ignored listing per whitelist
Jan  6 18:32:53.042: INFO: namespace e2e-tests-daemonsets-rzchq deletion completed in 6.130421595s

• [SLOW TEST:25.364 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:32:53.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:33:28.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-mrjww" for this suite.
Jan  6 18:33:34.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:33:34.942: INFO: namespace: e2e-tests-container-runtime-mrjww, resource: bindings, ignored listing per whitelist
Jan  6 18:33:35.007: INFO: namespace e2e-tests-container-runtime-mrjww deletion completed in 6.127502715s

• [SLOW TEST:41.965 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:33:35.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  6 18:33:35.159: INFO: Waiting up to 5m0s for pod "pod-af7876ff-504d-11eb-8655-0242ac110009" in namespace "e2e-tests-emptydir-brnnd" to be "success or failure"
Jan  6 18:33:35.180: INFO: Pod "pod-af7876ff-504d-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 21.638802ms
Jan  6 18:33:37.185: INFO: Pod "pod-af7876ff-504d-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025902538s
Jan  6 18:33:39.188: INFO: Pod "pod-af7876ff-504d-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029428796s
STEP: Saw pod success
Jan  6 18:33:39.188: INFO: Pod "pod-af7876ff-504d-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:33:39.191: INFO: Trying to get logs from node hunter-worker2 pod pod-af7876ff-504d-11eb-8655-0242ac110009 container test-container: 
STEP: delete the pod
Jan  6 18:33:39.228: INFO: Waiting for pod pod-af7876ff-504d-11eb-8655-0242ac110009 to disappear
Jan  6 18:33:39.234: INFO: Pod pod-af7876ff-504d-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:33:39.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-brnnd" for this suite.
Jan  6 18:33:45.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:33:45.255: INFO: namespace: e2e-tests-emptydir-brnnd, resource: bindings, ignored listing per whitelist
Jan  6 18:33:45.330: INFO: namespace e2e-tests-emptydir-brnnd deletion completed in 6.092968735s

• [SLOW TEST:10.323 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:33:45.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan  6 18:33:52.483: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:33:53.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-lfbpg" for this suite.
Jan  6 18:34:15.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:34:15.604: INFO: namespace: e2e-tests-replicaset-lfbpg, resource: bindings, ignored listing per whitelist
Jan  6 18:34:15.626: INFO: namespace e2e-tests-replicaset-lfbpg deletion completed in 22.122019326s

• [SLOW TEST:30.295 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:34:15.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-c7a5ed04-504d-11eb-8655-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  6 18:34:15.765: INFO: Waiting up to 5m0s for pod "pod-secrets-c7ac4ece-504d-11eb-8655-0242ac110009" in namespace "e2e-tests-secrets-hbpgk" to be "success or failure"
Jan  6 18:34:15.767: INFO: Pod "pod-secrets-c7ac4ece-504d-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.634798ms
Jan  6 18:34:17.771: INFO: Pod "pod-secrets-c7ac4ece-504d-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006648588s
Jan  6 18:34:19.775: INFO: Pod "pod-secrets-c7ac4ece-504d-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009971524s
STEP: Saw pod success
Jan  6 18:34:19.775: INFO: Pod "pod-secrets-c7ac4ece-504d-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:34:19.777: INFO: Trying to get logs from node hunter-worker pod pod-secrets-c7ac4ece-504d-11eb-8655-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan  6 18:34:19.814: INFO: Waiting for pod pod-secrets-c7ac4ece-504d-11eb-8655-0242ac110009 to disappear
Jan  6 18:34:19.828: INFO: Pod pod-secrets-c7ac4ece-504d-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:34:19.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hbpgk" for this suite.
Jan  6 18:34:25.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:34:25.925: INFO: namespace: e2e-tests-secrets-hbpgk, resource: bindings, ignored listing per whitelist
Jan  6 18:34:25.957: INFO: namespace e2e-tests-secrets-hbpgk deletion completed in 6.125764614s

• [SLOW TEST:10.331 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:34:25.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0106 18:34:27.143871       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  6 18:34:27.143: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:34:27.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-kgwp2" for this suite.
Jan  6 18:34:33.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:34:33.226: INFO: namespace: e2e-tests-gc-kgwp2, resource: bindings, ignored listing per whitelist
Jan  6 18:34:33.278: INFO: namespace e2e-tests-gc-kgwp2 deletion completed in 6.131032061s

• [SLOW TEST:7.321 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:34:33.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  6 18:34:33.338: INFO: PodSpec: initContainers in spec.initContainers
Jan  6 18:35:22.256: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d226b23a-504d-11eb-8655-0242ac110009", GenerateName:"", Namespace:"e2e-tests-init-container-lcnjk", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-lcnjk/pods/pod-init-d226b23a-504d-11eb-8655-0242ac110009", UID:"d2275ae9-504d-11eb-8302-0242ac120002", ResourceVersion:"18064636", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63745554873, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"338740858"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-pjjqx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001971680), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pjjqx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pjjqx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pjjqx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001369798), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001987080), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001369890)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0013698b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0013698b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0013698bc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745554873, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745554873, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745554873, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745554873, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.1.115", 
StartTime:(*v1.Time)(0xc00256dfa0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002730070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027300e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://cf369c57989dc16b330a8cd7f83d9f841571d24d4f87e1eeb9a2afd61802f290"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00256dfe0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00256dfc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:35:22.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-lcnjk" for this suite.
Jan  6 18:35:44.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:35:44.361: INFO: namespace: e2e-tests-init-container-lcnjk, resource: bindings, ignored listing per whitelist
Jan  6 18:35:44.423: INFO: namespace e2e-tests-init-container-lcnjk deletion completed in 22.111587467s

• [SLOW TEST:71.144 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:35:44.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-fc9480c7-504d-11eb-8655-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  6 18:35:44.532: INFO: Waiting up to 5m0s for pod "pod-secrets-fc94ffa4-504d-11eb-8655-0242ac110009" in namespace "e2e-tests-secrets-5qdqt" to be "success or failure"
Jan  6 18:35:44.604: INFO: Pod "pod-secrets-fc94ffa4-504d-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 72.038976ms
Jan  6 18:35:46.608: INFO: Pod "pod-secrets-fc94ffa4-504d-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075840159s
Jan  6 18:35:48.612: INFO: Pod "pod-secrets-fc94ffa4-504d-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080151167s
STEP: Saw pod success
Jan  6 18:35:48.612: INFO: Pod "pod-secrets-fc94ffa4-504d-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:35:48.615: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-fc94ffa4-504d-11eb-8655-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan  6 18:35:48.640: INFO: Waiting for pod pod-secrets-fc94ffa4-504d-11eb-8655-0242ac110009 to disappear
Jan  6 18:35:48.644: INFO: Pod pod-secrets-fc94ffa4-504d-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:35:48.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5qdqt" for this suite.
Jan  6 18:35:54.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:35:54.727: INFO: namespace: e2e-tests-secrets-5qdqt, resource: bindings, ignored listing per whitelist
Jan  6 18:35:54.765: INFO: namespace e2e-tests-secrets-5qdqt deletion completed in 6.117417707s

• [SLOW TEST:10.342 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:35:54.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0106 18:36:25.396571       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  6 18:36:25.396: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:36:25.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-2zmzb" for this suite.
Jan  6 18:36:33.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:36:33.451: INFO: namespace: e2e-tests-gc-2zmzb, resource: bindings, ignored listing per whitelist
Jan  6 18:36:33.529: INFO: namespace e2e-tests-gc-2zmzb deletion completed in 8.128609896s

• [SLOW TEST:38.764 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:36:33.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-19d89a4c-504e-11eb-8655-0242ac110009
STEP: Creating configMap with name cm-test-opt-upd-19d89af0-504e-11eb-8655-0242ac110009
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-19d89a4c-504e-11eb-8655-0242ac110009
STEP: Updating configmap cm-test-opt-upd-19d89af0-504e-11eb-8655-0242ac110009
STEP: Creating configMap with name cm-test-opt-create-19d89b28-504e-11eb-8655-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:36:41.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4ddkw" for this suite.
Jan  6 18:37:05.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:37:05.786: INFO: namespace: e2e-tests-projected-4ddkw, resource: bindings, ignored listing per whitelist
Jan  6 18:37:05.853: INFO: namespace e2e-tests-projected-4ddkw deletion completed in 24.12452683s

• [SLOW TEST:32.324 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:37:05.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-2d1d4214-504e-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  6 18:37:05.984: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2d1f6568-504e-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-rmxtv" to be "success or failure"
Jan  6 18:37:06.006: INFO: Pod "pod-projected-configmaps-2d1f6568-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 22.192584ms
Jan  6 18:37:08.010: INFO: Pod "pod-projected-configmaps-2d1f6568-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026407464s
Jan  6 18:37:10.024: INFO: Pod "pod-projected-configmaps-2d1f6568-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04044508s
Jan  6 18:37:12.030: INFO: Pod "pod-projected-configmaps-2d1f6568-504e-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046298976s
STEP: Saw pod success
Jan  6 18:37:12.030: INFO: Pod "pod-projected-configmaps-2d1f6568-504e-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:37:12.033: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-2d1f6568-504e-11eb-8655-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  6 18:37:12.050: INFO: Waiting for pod pod-projected-configmaps-2d1f6568-504e-11eb-8655-0242ac110009 to disappear
Jan  6 18:37:12.054: INFO: Pod pod-projected-configmaps-2d1f6568-504e-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:37:12.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rmxtv" for this suite.
Jan  6 18:37:18.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:37:18.141: INFO: namespace: e2e-tests-projected-rmxtv, resource: bindings, ignored listing per whitelist
Jan  6 18:37:18.172: INFO: namespace e2e-tests-projected-rmxtv deletion completed in 6.114754351s

• [SLOW TEST:12.319 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:37:18.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-knhpg
Jan  6 18:37:22.669: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-knhpg
STEP: checking the pod's current state and verifying that restartCount is present
Jan  6 18:37:22.672: INFO: Initial restart count of pod liveness-http is 0
Jan  6 18:37:44.718: INFO: Restart count of pod e2e-tests-container-probe-knhpg/liveness-http is now 1 (22.045850084s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:37:44.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-knhpg" for this suite.
Jan  6 18:37:50.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:37:50.810: INFO: namespace: e2e-tests-container-probe-knhpg, resource: bindings, ignored listing per whitelist
Jan  6 18:37:50.849: INFO: namespace e2e-tests-container-probe-knhpg deletion completed in 6.111067324s

• [SLOW TEST:32.677 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:37:50.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-47f28263-504e-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  6 18:37:50.998: INFO: Waiting up to 5m0s for pod "pod-configmaps-47f5f530-504e-11eb-8655-0242ac110009" in namespace "e2e-tests-configmap-pkggd" to be "success or failure"
Jan  6 18:37:51.014: INFO: Pod "pod-configmaps-47f5f530-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.669342ms
Jan  6 18:37:53.098: INFO: Pod "pod-configmaps-47f5f530-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099938163s
Jan  6 18:37:55.109: INFO: Pod "pod-configmaps-47f5f530-504e-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.11105431s
Jan  6 18:37:57.112: INFO: Pod "pod-configmaps-47f5f530-504e-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114472358s
STEP: Saw pod success
Jan  6 18:37:57.112: INFO: Pod "pod-configmaps-47f5f530-504e-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:37:57.115: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-47f5f530-504e-11eb-8655-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jan  6 18:37:57.183: INFO: Waiting for pod pod-configmaps-47f5f530-504e-11eb-8655-0242ac110009 to disappear
Jan  6 18:37:57.193: INFO: Pod pod-configmaps-47f5f530-504e-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:37:57.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pkggd" for this suite.
Jan  6 18:38:03.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:38:03.272: INFO: namespace: e2e-tests-configmap-pkggd, resource: bindings, ignored listing per whitelist
Jan  6 18:38:03.306: INFO: namespace e2e-tests-configmap-pkggd deletion completed in 6.109591553s

• [SLOW TEST:12.457 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:38:03.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-4f642189-504e-11eb-8655-0242ac110009
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-4f642189-504e-11eb-8655-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:38:09.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rbxrh" for this suite.
Jan  6 18:38:31.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:38:31.615: INFO: namespace: e2e-tests-configmap-rbxrh, resource: bindings, ignored listing per whitelist
Jan  6 18:38:31.615: INFO: namespace e2e-tests-configmap-rbxrh deletion completed in 22.108541773s

• [SLOW TEST:28.309 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:38:31.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan  6 18:38:31.728: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix217268733/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:38:31.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xwh9g" for this suite.
Jan  6 18:38:37.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:38:37.841: INFO: namespace: e2e-tests-kubectl-xwh9g, resource: bindings, ignored listing per whitelist
Jan  6 18:38:37.911: INFO: namespace e2e-tests-kubectl-xwh9g deletion completed in 6.100889423s

• [SLOW TEST:6.297 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:38:37.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  6 18:38:44.762: INFO: 0 pods remaining
Jan  6 18:38:44.762: INFO: 0 pods has nil DeletionTimestamp
Jan  6 18:38:44.762: INFO: 
STEP: Gathering metrics
W0106 18:38:45.186105       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  6 18:38:45.186: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:38:45.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-r7tnv" for this suite.
Jan  6 18:38:51.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:38:51.640: INFO: namespace: e2e-tests-gc-r7tnv, resource: bindings, ignored listing per whitelist
Jan  6 18:38:51.656: INFO: namespace e2e-tests-gc-r7tnv deletion completed in 6.467356907s

• [SLOW TEST:13.744 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
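The deleteOptions body used by the garbage-collector spec above is not shown in the log; with `propagationPolicy: Foreground`, the API server keeps the ReplicationController around (with a deletion timestamp) until all of its dependent pods are gone. A minimal sketch of such a request body, assuming the standard `meta/v1` DeleteOptions schema:

```yaml
# Hypothetical DeleteOptions illustrating foreground cascading deletion:
# the owner (the rc) is retained until every dependent pod is deleted.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Foreground
```

The same behavior can be requested from kubectl with `--cascade=foreground`.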
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:38:51.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  6 18:38:51.936: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"6c435daa-504e-11eb-8302-0242ac120002", Controller:(*bool)(0xc001da6a4a), BlockOwnerDeletion:(*bool)(0xc001da6a4b)}}
Jan  6 18:38:52.008: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6c406d43-504e-11eb-8302-0242ac120002", Controller:(*bool)(0xc001a1f14a), BlockOwnerDeletion:(*bool)(0xc001a1f14b)}}
Jan  6 18:38:52.026: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"6c4113ef-504e-11eb-8302-0242ac120002", Controller:(*bool)(0xc00157a302), BlockOwnerDeletion:(*bool)(0xc00157a303)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:38:57.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ch4cw" for this suite.
Jan  6 18:39:03.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:39:03.138: INFO: namespace: e2e-tests-gc-ch4cw, resource: bindings, ignored listing per whitelist
Jan  6 18:39:03.145: INFO: namespace e2e-tests-gc-ch4cw deletion completed in 6.09542545s

• [SLOW TEST:11.489 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:39:03.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-7309aef3-504e-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  6 18:39:03.294: INFO: Waiting up to 5m0s for pod "pod-configmaps-730b855c-504e-11eb-8655-0242ac110009" in namespace "e2e-tests-configmap-z2fl8" to be "success or failure"
Jan  6 18:39:03.310: INFO: Pod "pod-configmaps-730b855c-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.20417ms
Jan  6 18:39:05.314: INFO: Pod "pod-configmaps-730b855c-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020496s
Jan  6 18:39:07.318: INFO: Pod "pod-configmaps-730b855c-504e-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024206604s
STEP: Saw pod success
Jan  6 18:39:07.318: INFO: Pod "pod-configmaps-730b855c-504e-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:39:07.320: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-730b855c-504e-11eb-8655-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jan  6 18:39:07.347: INFO: Waiting for pod pod-configmaps-730b855c-504e-11eb-8655-0242ac110009 to disappear
Jan  6 18:39:07.398: INFO: Pod pod-configmaps-730b855c-504e-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:39:07.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-z2fl8" for this suite.
Jan  6 18:39:13.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:39:13.434: INFO: namespace: e2e-tests-configmap-z2fl8, resource: bindings, ignored listing per whitelist
Jan  6 18:39:13.507: INFO: namespace e2e-tests-configmap-z2fl8 deletion completed in 6.106558472s

• [SLOW TEST:10.362 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
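The pod spec behind the ConfigMap volume test above is not included in the log; a minimal manifest exercising the same behavior (names and mount path are hypothetical, not taken from the run) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # hypothetical name
spec:
  restartPolicy: Never
  volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-example   # hypothetical ConfigMap
  containers:
    - name: configmap-volume-test
      image: busybox
      # Reads a key projected into the volume; the test passes when the
      # container exits successfully ("success or failure" condition).
      command: ["cat", "/etc/configmap-volume/data-1"]
      volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
```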
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:39:13.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  6 18:39:13.639: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7933974d-504e-11eb-8655-0242ac110009" in namespace "e2e-tests-downward-api-94fqc" to be "success or failure"
Jan  6 18:39:13.655: INFO: Pod "downwardapi-volume-7933974d-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.885815ms
Jan  6 18:39:15.691: INFO: Pod "downwardapi-volume-7933974d-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05192014s
Jan  6 18:39:17.695: INFO: Pod "downwardapi-volume-7933974d-504e-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055494396s
STEP: Saw pod success
Jan  6 18:39:17.695: INFO: Pod "downwardapi-volume-7933974d-504e-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:39:17.697: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7933974d-504e-11eb-8655-0242ac110009 container client-container: 
STEP: delete the pod
Jan  6 18:39:17.731: INFO: Waiting for pod downwardapi-volume-7933974d-504e-11eb-8655-0242ac110009 to disappear
Jan  6 18:39:17.756: INFO: Pod downwardapi-volume-7933974d-504e-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:39:17.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-94fqc" for this suite.
Jan  6 18:39:23.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:39:23.795: INFO: namespace: e2e-tests-downward-api-94fqc, resource: bindings, ignored listing per whitelist
Jan  6 18:39:23.859: INFO: namespace e2e-tests-downward-api-94fqc deletion completed in 6.099252666s

• [SLOW TEST:10.351 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
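The downward API volume spec above relies on the defaulting rule that, when a container sets no memory limit, `resourceFieldRef` reports the node's allocatable memory. A sketch of a manifest exercising this (names and paths are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: client-container
      image: busybox
      # No resources.limits.memory is set, so the projected file contains
      # the node allocatable memory instead of a container limit.
      command: ["cat", "/etc/podinfo/memory_limit"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```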
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:39:23.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  6 18:39:23.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f5bd2f9-504e-11eb-8655-0242ac110009" in namespace "e2e-tests-downward-api-sglcc" to be "success or failure"
Jan  6 18:39:23.987: INFO: Pod "downwardapi-volume-7f5bd2f9-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 25.807873ms
Jan  6 18:39:26.009: INFO: Pod "downwardapi-volume-7f5bd2f9-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047686086s
Jan  6 18:39:28.012: INFO: Pod "downwardapi-volume-7f5bd2f9-504e-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050786967s
STEP: Saw pod success
Jan  6 18:39:28.012: INFO: Pod "downwardapi-volume-7f5bd2f9-504e-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:39:28.014: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-7f5bd2f9-504e-11eb-8655-0242ac110009 container client-container: 
STEP: delete the pod
Jan  6 18:39:28.166: INFO: Waiting for pod downwardapi-volume-7f5bd2f9-504e-11eb-8655-0242ac110009 to disappear
Jan  6 18:39:28.202: INFO: Pod downwardapi-volume-7f5bd2f9-504e-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:39:28.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sglcc" for this suite.
Jan  6 18:39:34.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:39:34.257: INFO: namespace: e2e-tests-downward-api-sglcc, resource: bindings, ignored listing per whitelist
Jan  6 18:39:34.314: INFO: namespace e2e-tests-downward-api-sglcc deletion completed in 6.108107322s

• [SLOW TEST:10.455 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:39:34.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  6 18:39:42.488: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  6 18:39:42.494: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  6 18:39:44.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  6 18:39:44.499: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  6 18:39:46.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  6 18:39:46.499: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  6 18:39:48.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  6 18:39:48.499: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  6 18:39:50.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  6 18:39:50.499: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  6 18:39:52.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  6 18:39:52.499: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  6 18:39:54.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  6 18:39:54.499: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  6 18:39:56.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  6 18:39:56.499: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  6 18:39:58.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  6 18:39:58.499: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  6 18:40:00.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  6 18:40:00.500: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  6 18:40:02.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  6 18:40:02.499: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  6 18:40:04.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  6 18:40:04.498: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  6 18:40:06.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  6 18:40:06.499: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:40:06.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-tvqc4" for this suite.
Jan  6 18:40:28.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:40:28.552: INFO: namespace: e2e-tests-container-lifecycle-hook-tvqc4, resource: bindings, ignored listing per whitelist
Jan  6 18:40:28.624: INFO: namespace e2e-tests-container-lifecycle-hook-tvqc4 deletion completed in 22.114140478s

• [SLOW TEST:54.309 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
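The lifecycle-hook spec above creates a handler pod and then a pod whose container runs a `preStop` exec hook on deletion; the long poll in the log is the framework waiting for graceful termination (the hook must finish before the container is killed). A minimal sketch of such a pod, with a hypothetical hook command reporting back to the handler:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
    - name: pod-with-prestop-exec-hook
      image: busybox
      command: ["sleep", "600"]
      lifecycle:
        preStop:
          exec:
            # Runs inside the container before termination; the target URL
            # and message are hypothetical, not taken from the run.
            command: ["sh", "-c", "wget -qO- http://handler-pod:8080/echo?msg=prestop"]
```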
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:40:28.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan  6 18:40:28.771: INFO: Waiting up to 5m0s for pod "var-expansion-a5f87638-504e-11eb-8655-0242ac110009" in namespace "e2e-tests-var-expansion-42sx6" to be "success or failure"
Jan  6 18:40:28.785: INFO: Pod "var-expansion-a5f87638-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 13.255244ms
Jan  6 18:40:30.836: INFO: Pod "var-expansion-a5f87638-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064678521s
Jan  6 18:40:32.841: INFO: Pod "var-expansion-a5f87638-504e-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.06914794s
Jan  6 18:40:34.845: INFO: Pod "var-expansion-a5f87638-504e-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.073309592s
STEP: Saw pod success
Jan  6 18:40:34.845: INFO: Pod "var-expansion-a5f87638-504e-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:40:34.848: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-a5f87638-504e-11eb-8655-0242ac110009 container dapi-container: 
STEP: delete the pod
Jan  6 18:40:34.909: INFO: Waiting for pod var-expansion-a5f87638-504e-11eb-8655-0242ac110009 to disappear
Jan  6 18:40:34.917: INFO: Pod var-expansion-a5f87638-504e-11eb-8655-0242ac110009 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:40:34.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-42sx6" for this suite.
Jan  6 18:40:40.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:40:40.970: INFO: namespace: e2e-tests-var-expansion-42sx6, resource: bindings, ignored listing per whitelist
Jan  6 18:40:41.023: INFO: namespace e2e-tests-var-expansion-42sx6 deletion completed in 6.103172841s

• [SLOW TEST:12.399 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
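The variable-expansion spec above exercises `$(VAR)` composition: an env var's value may reference previously defined env vars, which the kubelet expands before starting the container. A sketch with hypothetical names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example         # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo $FOOBAR"]
      env:
        - name: FOO
          value: foo-value
        - name: BAR
          value: bar-value
        - name: FOOBAR
          # $(FOO) and $(BAR) refer to the env vars defined above and are
          # expanded by Kubernetes, not by the container's shell.
          value: $(FOO);;$(BAR)
```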
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:40:41.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  6 18:40:41.156: INFO: Waiting up to 5m0s for pod "downward-api-ad6251db-504e-11eb-8655-0242ac110009" in namespace "e2e-tests-downward-api-fgl4n" to be "success or failure"
Jan  6 18:40:41.174: INFO: Pod "downward-api-ad6251db-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 18.919812ms
Jan  6 18:40:43.232: INFO: Pod "downward-api-ad6251db-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076020637s
Jan  6 18:40:45.363: INFO: Pod "downward-api-ad6251db-504e-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.207597666s
STEP: Saw pod success
Jan  6 18:40:45.363: INFO: Pod "downward-api-ad6251db-504e-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:40:45.371: INFO: Trying to get logs from node hunter-worker pod downward-api-ad6251db-504e-11eb-8655-0242ac110009 container dapi-container: 
STEP: delete the pod
Jan  6 18:40:45.414: INFO: Waiting for pod downward-api-ad6251db-504e-11eb-8655-0242ac110009 to disappear
Jan  6 18:40:45.429: INFO: Pod downward-api-ad6251db-504e-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:40:45.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fgl4n" for this suite.
Jan  6 18:40:51.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:40:51.546: INFO: namespace: e2e-tests-downward-api-fgl4n, resource: bindings, ignored listing per whitelist
Jan  6 18:40:51.562: INFO: namespace e2e-tests-downward-api-fgl4n deletion completed in 6.12979222s

• [SLOW TEST:10.539 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
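The downward API spec above injects the host IP through the `fieldRef` env-var source. A minimal sketch (pod and container names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example          # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              # status.hostIP resolves to the IP of the node running the pod.
              fieldPath: status.hostIP
```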
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:40:51.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  6 18:40:51.681: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan  6 18:40:51.704: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-m65kw/daemonsets","resourceVersion":"18065873"},"items":null}

Jan  6 18:40:51.707: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-m65kw/pods","resourceVersion":"18065873"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:40:51.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-m65kw" for this suite.
Jan  6 18:40:57.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:40:57.747: INFO: namespace: e2e-tests-daemonsets-m65kw, resource: bindings, ignored listing per whitelist
Jan  6 18:40:57.827: INFO: namespace e2e-tests-daemonsets-m65kw deletion completed in 6.108723897s

S [SKIPPING] [6.264 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan  6 18:40:51.681: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:40:57.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan  6 18:40:57.997: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-l8ds6" to be "success or failure"
Jan  6 18:40:58.030: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 32.912382ms
Jan  6 18:41:00.036: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038342492s
Jan  6 18:41:02.040: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04263538s
Jan  6 18:41:04.045: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047378173s
STEP: Saw pod success
Jan  6 18:41:04.045: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  6 18:41:04.048: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  6 18:41:04.068: INFO: Waiting for pod pod-host-path-test to disappear
Jan  6 18:41:04.073: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:41:04.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-l8ds6" for this suite.
Jan  6 18:41:10.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:41:10.186: INFO: namespace: e2e-tests-hostpath-l8ds6, resource: bindings, ignored listing per whitelist
Jan  6 18:41:10.200: INFO: namespace e2e-tests-hostpath-l8ds6 deletion completed in 6.123063981s

• [SLOW TEST:12.373 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
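The HostPath spec above mounts a host directory and checks the mode of the resulting volume. A sketch of a comparable pod (host path and names are hypothetical; the actual test uses the framework's default hostPath setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  containers:
    - name: test-container-1
      image: busybox
      # Lists the mounted directory so its mode can be inspected in the logs.
      command: ["ls", "-ld", "/test-volume"]
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
  volumes:
    - name: test-volume
      hostPath:
        path: /tmp/test-dir           # hypothetical host directory
        type: DirectoryOrCreate
```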
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:41:10.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  6 18:41:10.346: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan  6 18:41:15.350: INFO: Pod name cleanup-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Jan  6 18:41:15.350: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  6 18:41:15.369: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-49nnz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-49nnz/deployments/test-cleanup-deployment,UID:c1c59d2d-504e-11eb-8302-0242ac120002,ResourceVersion:18065971,Generation:1,CreationTimestamp:2021-01-06 18:41:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan  6 18:41:15.376: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jan  6 18:41:15.376: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan  6 18:41:15.376: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-49nnz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-49nnz/replicasets/test-cleanup-controller,UID:bec7b420-504e-11eb-8302-0242ac120002,ResourceVersion:18065972,Generation:1,CreationTimestamp:2021-01-06 18:41:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c1c59d2d-504e-11eb-8302-0242ac120002 0xc0023e6197 0xc0023e6198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  6 18:41:15.383: INFO: Pod "test-cleanup-controller-kfb9j" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-kfb9j,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-49nnz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49nnz/pods/test-cleanup-controller-kfb9j,UID:bec9c0ef-504e-11eb-8302-0242ac120002,ResourceVersion:18065964,Generation:0,CreationTimestamp:2021-01-06 18:41:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller bec7b420-504e-11eb-8302-0242ac120002 0xc0028967df 0xc0028967f0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k5mrl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5mrl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5mrl true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002896860} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002896910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:41:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:41:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:41:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:41:10 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.79,StartTime:2021-01-06 18:41:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-06 18:41:12 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1b98494627da693551099a5b3aec3dad9cbe264dc5c7416fb86e2e7dc31fe71b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:41:15.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-49nnz" for this suite.
Jan  6 18:41:23.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:41:23.620: INFO: namespace: e2e-tests-deployment-49nnz, resource: bindings, ignored listing per whitelist
Jan  6 18:41:23.648: INFO: namespace e2e-tests-deployment-49nnz deletion completed in 8.186767851s

• [SLOW TEST:13.447 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:41:23.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  6 18:41:23.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cjjdb'
Jan  6 18:41:24.128: INFO: stderr: ""
Jan  6 18:41:24.128: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  6 18:41:24.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cjjdb'
Jan  6 18:41:24.269: INFO: stderr: ""
Jan  6 18:41:24.269: INFO: stdout: "update-demo-nautilus-88cst update-demo-nautilus-kpqwr "
Jan  6 18:41:24.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-88cst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cjjdb'
Jan  6 18:41:24.373: INFO: stderr: ""
Jan  6 18:41:24.373: INFO: stdout: ""
Jan  6 18:41:24.373: INFO: update-demo-nautilus-88cst is created but not running
Jan  6 18:41:29.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cjjdb'
Jan  6 18:41:29.472: INFO: stderr: ""
Jan  6 18:41:29.472: INFO: stdout: "update-demo-nautilus-88cst update-demo-nautilus-kpqwr "
Jan  6 18:41:29.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-88cst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cjjdb'
Jan  6 18:41:29.575: INFO: stderr: ""
Jan  6 18:41:29.575: INFO: stdout: "true"
Jan  6 18:41:29.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-88cst -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cjjdb'
Jan  6 18:41:29.676: INFO: stderr: ""
Jan  6 18:41:29.676: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 18:41:29.676: INFO: validating pod update-demo-nautilus-88cst
Jan  6 18:41:29.681: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  6 18:41:29.681: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  6 18:41:29.681: INFO: update-demo-nautilus-88cst is verified up and running
Jan  6 18:41:29.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kpqwr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cjjdb'
Jan  6 18:41:29.782: INFO: stderr: ""
Jan  6 18:41:29.783: INFO: stdout: "true"
Jan  6 18:41:29.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kpqwr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cjjdb'
Jan  6 18:41:29.887: INFO: stderr: ""
Jan  6 18:41:29.887: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 18:41:29.887: INFO: validating pod update-demo-nautilus-kpqwr
Jan  6 18:41:29.891: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  6 18:41:29.891: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  6 18:41:29.891: INFO: update-demo-nautilus-kpqwr is verified up and running
STEP: using delete to clean up resources
Jan  6 18:41:29.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cjjdb'
Jan  6 18:41:29.985: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  6 18:41:29.985: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  6 18:41:29.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-cjjdb'
Jan  6 18:41:30.095: INFO: stderr: "No resources found.\n"
Jan  6 18:41:30.095: INFO: stdout: ""
Jan  6 18:41:30.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-cjjdb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  6 18:41:30.188: INFO: stderr: ""
Jan  6 18:41:30.188: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:41:30.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cjjdb" for this suite.
Jan  6 18:41:36.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:41:36.482: INFO: namespace: e2e-tests-kubectl-cjjdb, resource: bindings, ignored listing per whitelist
Jan  6 18:41:36.569: INFO: namespace e2e-tests-kubectl-cjjdb deletion completed in 6.376709088s

• [SLOW TEST:12.920 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:41:36.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:41:40.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-ktm8g" for this suite.
Jan  6 18:41:46.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:41:46.804: INFO: namespace: e2e-tests-kubelet-test-ktm8g, resource: bindings, ignored listing per whitelist
Jan  6 18:41:46.833: INFO: namespace e2e-tests-kubelet-test-ktm8g deletion completed in 6.107208249s

• [SLOW TEST:10.264 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:41:46.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  6 18:41:46.962: INFO: Creating ReplicaSet my-hostname-basic-d49c7cf2-504e-11eb-8655-0242ac110009
Jan  6 18:41:46.984: INFO: Pod name my-hostname-basic-d49c7cf2-504e-11eb-8655-0242ac110009: Found 0 pods out of 1
Jan  6 18:41:51.989: INFO: Pod name my-hostname-basic-d49c7cf2-504e-11eb-8655-0242ac110009: Found 1 pods out of 1
Jan  6 18:41:51.989: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d49c7cf2-504e-11eb-8655-0242ac110009" is running
Jan  6 18:41:51.993: INFO: Pod "my-hostname-basic-d49c7cf2-504e-11eb-8655-0242ac110009-hsvs9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-06 18:41:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-06 18:41:50 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-06 18:41:50 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-06 18:41:46 +0000 UTC Reason: Message:}])
Jan  6 18:41:51.993: INFO: Trying to dial the pod
Jan  6 18:41:57.006: INFO: Controller my-hostname-basic-d49c7cf2-504e-11eb-8655-0242ac110009: Got expected result from replica 1 [my-hostname-basic-d49c7cf2-504e-11eb-8655-0242ac110009-hsvs9]: "my-hostname-basic-d49c7cf2-504e-11eb-8655-0242ac110009-hsvs9", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:41:57.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-j7cl4" for this suite.
Jan  6 18:42:03.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:42:03.109: INFO: namespace: e2e-tests-replicaset-j7cl4, resource: bindings, ignored listing per whitelist
Jan  6 18:42:03.161: INFO: namespace e2e-tests-replicaset-j7cl4 deletion completed in 6.152112382s

• [SLOW TEST:16.329 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:42:03.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  6 18:42:07.825: INFO: Successfully updated pod "pod-update-de53833d-504e-11eb-8655-0242ac110009"
STEP: verifying the updated pod is in kubernetes
Jan  6 18:42:07.830: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:42:07.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8srv8" for this suite.
Jan  6 18:42:30.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:42:30.339: INFO: namespace: e2e-tests-pods-8srv8, resource: bindings, ignored listing per whitelist
Jan  6 18:42:30.361: INFO: namespace e2e-tests-pods-8srv8 deletion completed in 22.528366433s

• [SLOW TEST:27.199 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:42:30.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-ee8cee6a-504e-11eb-8655-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  6 18:42:30.493: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ee8d860c-504e-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-lcfd5" to be "success or failure"
Jan  6 18:42:30.499: INFO: Pod "pod-projected-secrets-ee8d860c-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 5.746867ms
Jan  6 18:42:33.485: INFO: Pod "pod-projected-secrets-ee8d860c-504e-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.991659755s
Jan  6 18:42:35.509: INFO: Pod "pod-projected-secrets-ee8d860c-504e-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.015613977s
STEP: Saw pod success
Jan  6 18:42:35.509: INFO: Pod "pod-projected-secrets-ee8d860c-504e-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:42:35.512: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-ee8d860c-504e-11eb-8655-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Jan  6 18:42:35.535: INFO: Waiting for pod pod-projected-secrets-ee8d860c-504e-11eb-8655-0242ac110009 to disappear
Jan  6 18:42:35.591: INFO: Pod pod-projected-secrets-ee8d860c-504e-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:42:35.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lcfd5" for this suite.
Jan  6 18:42:41.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:42:41.662: INFO: namespace: e2e-tests-projected-lcfd5, resource: bindings, ignored listing per whitelist
Jan  6 18:42:41.739: INFO: namespace e2e-tests-projected-lcfd5 deletion completed in 6.14376493s

• [SLOW TEST:11.378 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:42:41.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  6 18:42:41.896: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan  6 18:42:41.929: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan  6 18:42:46.934: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  6 18:42:46.934: INFO: Creating deployment "test-rolling-update-deployment"
Jan  6 18:42:46.939: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan  6 18:42:46.948: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan  6 18:42:48.956: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan  6 18:42:48.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745555367, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745555367, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745555367, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745555366, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  6 18:42:50.964: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  6 18:42:50.973: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-nq82q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nq82q/deployments/test-rolling-update-deployment,UID:f85bb1a6-504e-11eb-8302-0242ac120002,ResourceVersion:18066393,Generation:1,CreationTimestamp:2021-01-06 18:42:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2021-01-06 18:42:47 +0000 UTC 2021-01-06 18:42:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-01-06 18:42:50 +0000 UTC 2021-01-06 18:42:46 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  6 18:42:50.977: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-nq82q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nq82q/replicasets/test-rolling-update-deployment-75db98fb4c,UID:f85e6717-504e-11eb-8302-0242ac120002,ResourceVersion:18066383,Generation:1,CreationTimestamp:2021-01-06 18:42:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment f85bb1a6-504e-11eb-8302-0242ac120002 0xc00194bc77 0xc00194bc78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  6 18:42:50.977: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan  6 18:42:50.977: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-nq82q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nq82q/replicasets/test-rolling-update-controller,UID:f55aede6-504e-11eb-8302-0242ac120002,ResourceVersion:18066392,Generation:2,CreationTimestamp:2021-01-06 18:42:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment f85bb1a6-504e-11eb-8302-0242ac120002 0xc001865b57 0xc001865b58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  6 18:42:50.980: INFO: Pod "test-rolling-update-deployment-75db98fb4c-m9vdw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-m9vdw,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-nq82q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nq82q/pods/test-rolling-update-deployment-75db98fb4c-m9vdw,UID:f85fd754-504e-11eb-8302-0242ac120002,ResourceVersion:18066382,Generation:0,CreationTimestamp:2021-01-06 18:42:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c f85e6717-504e-11eb-8302-0242ac120002 0xc001b80c57 0xc001b80c58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wcj25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wcj25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-wcj25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b80d80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b80e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:42:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:42:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:42:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:42:46 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.83,StartTime:2021-01-06 18:42:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2021-01-06 18:42:49 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://af8d54c9d4e0292440344f2dfafbb8f31177c1504ebc39c28f2e507bf302040a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:42:50.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-nq82q" for this suite.
Jan  6 18:42:57.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:42:57.190: INFO: namespace: e2e-tests-deployment-nq82q, resource: bindings, ignored listing per whitelist
Jan  6 18:42:57.240: INFO: namespace e2e-tests-deployment-nq82q deletion completed in 6.255888803s

• [SLOW TEST:15.501 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:42:57.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-ptcf
STEP: Creating a pod to test atomic-volume-subpath
Jan  6 18:42:57.386: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-ptcf" in namespace "e2e-tests-subpath-dcdd8" to be "success or failure"
Jan  6 18:42:57.391: INFO: Pod "pod-subpath-test-secret-ptcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153382ms
Jan  6 18:42:59.395: INFO: Pod "pod-subpath-test-secret-ptcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008304564s
Jan  6 18:43:01.413: INFO: Pod "pod-subpath-test-secret-ptcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026466387s
Jan  6 18:43:03.418: INFO: Pod "pod-subpath-test-secret-ptcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030990129s
Jan  6 18:43:05.421: INFO: Pod "pod-subpath-test-secret-ptcf": Phase="Running", Reason="", readiness=false. Elapsed: 8.034645487s
Jan  6 18:43:07.426: INFO: Pod "pod-subpath-test-secret-ptcf": Phase="Running", Reason="", readiness=false. Elapsed: 10.039067745s
Jan  6 18:43:09.429: INFO: Pod "pod-subpath-test-secret-ptcf": Phase="Running", Reason="", readiness=false. Elapsed: 12.04218514s
Jan  6 18:43:11.433: INFO: Pod "pod-subpath-test-secret-ptcf": Phase="Running", Reason="", readiness=false. Elapsed: 14.046047614s
Jan  6 18:43:13.437: INFO: Pod "pod-subpath-test-secret-ptcf": Phase="Running", Reason="", readiness=false. Elapsed: 16.050601595s
Jan  6 18:43:15.443: INFO: Pod "pod-subpath-test-secret-ptcf": Phase="Running", Reason="", readiness=false. Elapsed: 18.056850013s
Jan  6 18:43:17.448: INFO: Pod "pod-subpath-test-secret-ptcf": Phase="Running", Reason="", readiness=false. Elapsed: 20.061065766s
Jan  6 18:43:19.452: INFO: Pod "pod-subpath-test-secret-ptcf": Phase="Running", Reason="", readiness=false. Elapsed: 22.065432608s
Jan  6 18:43:21.457: INFO: Pod "pod-subpath-test-secret-ptcf": Phase="Running", Reason="", readiness=false. Elapsed: 24.070214593s
Jan  6 18:43:23.461: INFO: Pod "pod-subpath-test-secret-ptcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.074225146s
STEP: Saw pod success
Jan  6 18:43:23.461: INFO: Pod "pod-subpath-test-secret-ptcf" satisfied condition "success or failure"
Jan  6 18:43:23.464: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-ptcf container test-container-subpath-secret-ptcf: 
STEP: delete the pod
Jan  6 18:43:23.666: INFO: Waiting for pod pod-subpath-test-secret-ptcf to disappear
Jan  6 18:43:23.875: INFO: Pod pod-subpath-test-secret-ptcf no longer exists
STEP: Deleting pod pod-subpath-test-secret-ptcf
Jan  6 18:43:23.875: INFO: Deleting pod "pod-subpath-test-secret-ptcf" in namespace "e2e-tests-subpath-dcdd8"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:43:23.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-dcdd8" for this suite.
Jan  6 18:43:29.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:43:29.977: INFO: namespace: e2e-tests-subpath-dcdd8, resource: bindings, ignored listing per whitelist
Jan  6 18:43:30.033: INFO: namespace e2e-tests-subpath-dcdd8 deletion completed in 6.107821239s

• [SLOW TEST:32.793 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:43:30.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  6 18:43:30.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:43:34.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-2bspc" for this suite.
Jan  6 18:44:26.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:44:26.253: INFO: namespace: e2e-tests-pods-2bspc, resource: bindings, ignored listing per whitelist
Jan  6 18:44:26.324: INFO: namespace e2e-tests-pods-2bspc deletion completed in 52.135650149s

• [SLOW TEST:56.289 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:44:26.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  6 18:44:26.406: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:44:27.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-jhj2b" for this suite.
Jan  6 18:44:33.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:44:33.713: INFO: namespace: e2e-tests-custom-resource-definition-jhj2b, resource: bindings, ignored listing per whitelist
Jan  6 18:44:33.737: INFO: namespace e2e-tests-custom-resource-definition-jhj2b deletion completed in 6.124112096s

• [SLOW TEST:7.413 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:44:33.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan  6 18:44:37.851: INFO: Pod pod-hostip-38112f86-504f-11eb-8655-0242ac110009 has hostIP: 172.18.0.3
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:44:37.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-9s89x" for this suite.
Jan  6 18:44:59.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:44:59.946: INFO: namespace: e2e-tests-pods-9s89x, resource: bindings, ignored listing per whitelist
Jan  6 18:44:59.972: INFO: namespace e2e-tests-pods-9s89x deletion completed in 22.117469804s

• [SLOW TEST:26.235 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:44:59.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-76m75
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  6 18:45:00.091: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  6 18:45:26.269: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.88:8080/dial?request=hostName&protocol=http&host=10.244.2.87&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-76m75 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 18:45:26.269: INFO: >>> kubeConfig: /root/.kube/config
I0106 18:45:26.306215       6 log.go:172] (0xc001a444d0) (0xc002be4140) Create stream
I0106 18:45:26.306255       6 log.go:172] (0xc001a444d0) (0xc002be4140) Stream added, broadcasting: 1
I0106 18:45:26.308513       6 log.go:172] (0xc001a444d0) Reply frame received for 1
I0106 18:45:26.308563       6 log.go:172] (0xc001a444d0) (0xc001efe0a0) Create stream
I0106 18:45:26.308579       6 log.go:172] (0xc001a444d0) (0xc001efe0a0) Stream added, broadcasting: 3
I0106 18:45:26.309511       6 log.go:172] (0xc001a444d0) Reply frame received for 3
I0106 18:45:26.309544       6 log.go:172] (0xc001a444d0) (0xc0014d60a0) Create stream
I0106 18:45:26.309556       6 log.go:172] (0xc001a444d0) (0xc0014d60a0) Stream added, broadcasting: 5
I0106 18:45:26.310469       6 log.go:172] (0xc001a444d0) Reply frame received for 5
I0106 18:45:26.399561       6 log.go:172] (0xc001a444d0) Data frame received for 3
I0106 18:45:26.399593       6 log.go:172] (0xc001efe0a0) (3) Data frame handling
I0106 18:45:26.399616       6 log.go:172] (0xc001efe0a0) (3) Data frame sent
I0106 18:45:26.400267       6 log.go:172] (0xc001a444d0) Data frame received for 3
I0106 18:45:26.400313       6 log.go:172] (0xc001efe0a0) (3) Data frame handling
I0106 18:45:26.400454       6 log.go:172] (0xc001a444d0) Data frame received for 5
I0106 18:45:26.400477       6 log.go:172] (0xc0014d60a0) (5) Data frame handling
I0106 18:45:26.402724       6 log.go:172] (0xc001a444d0) Data frame received for 1
I0106 18:45:26.402760       6 log.go:172] (0xc002be4140) (1) Data frame handling
I0106 18:45:26.402787       6 log.go:172] (0xc002be4140) (1) Data frame sent
I0106 18:45:26.402808       6 log.go:172] (0xc001a444d0) (0xc002be4140) Stream removed, broadcasting: 1
I0106 18:45:26.402828       6 log.go:172] (0xc001a444d0) Go away received
I0106 18:45:26.403007       6 log.go:172] (0xc001a444d0) (0xc002be4140) Stream removed, broadcasting: 1
I0106 18:45:26.403032       6 log.go:172] (0xc001a444d0) (0xc001efe0a0) Stream removed, broadcasting: 3
I0106 18:45:26.403050       6 log.go:172] (0xc001a444d0) (0xc0014d60a0) Stream removed, broadcasting: 5
Jan  6 18:45:26.403: INFO: Waiting for endpoints: map[]
Jan  6 18:45:26.406: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.88:8080/dial?request=hostName&protocol=http&host=10.244.1.134&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-76m75 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 18:45:26.406: INFO: >>> kubeConfig: /root/.kube/config
I0106 18:45:26.435479       6 log.go:172] (0xc001a449a0) (0xc002be4320) Create stream
I0106 18:45:26.435509       6 log.go:172] (0xc001a449a0) (0xc002be4320) Stream added, broadcasting: 1
I0106 18:45:26.439282       6 log.go:172] (0xc001a449a0) Reply frame received for 1
I0106 18:45:26.439321       6 log.go:172] (0xc001a449a0) (0xc001efe140) Create stream
I0106 18:45:26.439331       6 log.go:172] (0xc001a449a0) (0xc001efe140) Stream added, broadcasting: 3
I0106 18:45:26.440031       6 log.go:172] (0xc001a449a0) Reply frame received for 3
I0106 18:45:26.440059       6 log.go:172] (0xc001a449a0) (0xc002be43c0) Create stream
I0106 18:45:26.440067       6 log.go:172] (0xc001a449a0) (0xc002be43c0) Stream added, broadcasting: 5
I0106 18:45:26.441127       6 log.go:172] (0xc001a449a0) Reply frame received for 5
I0106 18:45:26.516248       6 log.go:172] (0xc001a449a0) Data frame received for 3
I0106 18:45:26.516282       6 log.go:172] (0xc001efe140) (3) Data frame handling
I0106 18:45:26.516303       6 log.go:172] (0xc001efe140) (3) Data frame sent
I0106 18:45:26.517039       6 log.go:172] (0xc001a449a0) Data frame received for 3
I0106 18:45:26.517071       6 log.go:172] (0xc001efe140) (3) Data frame handling
I0106 18:45:26.517122       6 log.go:172] (0xc001a449a0) Data frame received for 5
I0106 18:45:26.517177       6 log.go:172] (0xc002be43c0) (5) Data frame handling
I0106 18:45:26.518750       6 log.go:172] (0xc001a449a0) Data frame received for 1
I0106 18:45:26.518784       6 log.go:172] (0xc002be4320) (1) Data frame handling
I0106 18:45:26.518795       6 log.go:172] (0xc002be4320) (1) Data frame sent
I0106 18:45:26.518807       6 log.go:172] (0xc001a449a0) (0xc002be4320) Stream removed, broadcasting: 1
I0106 18:45:26.518838       6 log.go:172] (0xc001a449a0) Go away received
I0106 18:45:26.518948       6 log.go:172] (0xc001a449a0) (0xc002be4320) Stream removed, broadcasting: 1
I0106 18:45:26.518968       6 log.go:172] (0xc001a449a0) (0xc001efe140) Stream removed, broadcasting: 3
I0106 18:45:26.518976       6 log.go:172] (0xc001a449a0) (0xc002be43c0) Stream removed, broadcasting: 5
Jan  6 18:45:26.519: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:45:26.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-76m75" for this suite.
Jan  6 18:45:50.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:45:50.598: INFO: namespace: e2e-tests-pod-network-test-76m75, resource: bindings, ignored listing per whitelist
Jan  6 18:45:50.629: INFO: namespace e2e-tests-pod-network-test-76m75 deletion completed in 24.106431017s

• [SLOW TEST:50.657 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:45:50.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  6 18:45:50.701: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  6 18:45:50.715: INFO: Waiting for terminating namespaces to be deleted...
Jan  6 18:45:50.717: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Jan  6 18:45:50.723: INFO: kube-proxy-ljths from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan  6 18:45:50.723: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  6 18:45:50.723: INFO: kindnet-8chxg from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan  6 18:45:50.723: INFO: 	Container kindnet-cni ready: true, restart count 0
Jan  6 18:45:50.723: INFO: chaos-daemon-6czfr from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan  6 18:45:50.723: INFO: 	Container chaos-daemon ready: true, restart count 0
Jan  6 18:45:50.723: INFO: coredns-54ff9cd656-grddq from kube-system started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan  6 18:45:50.723: INFO: 	Container coredns ready: true, restart count 0
Jan  6 18:45:50.723: INFO: coredns-54ff9cd656-mplq2 from kube-system started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan  6 18:45:50.723: INFO: 	Container coredns ready: true, restart count 0
Jan  6 18:45:50.723: INFO: local-path-provisioner-65f5ddcc-46m7g from local-path-storage started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan  6 18:45:50.723: INFO: 	Container local-path-provisioner ready: true, restart count 41
Jan  6 18:45:50.723: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Jan  6 18:45:50.727: INFO: chaos-controller-manager-5c78c48d45-tq7m7 from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan  6 18:45:50.727: INFO: 	Container chaos-mesh ready: true, restart count 0
Jan  6 18:45:50.727: INFO: chaos-daemon-9ptbc from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan  6 18:45:50.727: INFO: 	Container chaos-daemon ready: true, restart count 0
Jan  6 18:45:50.727: INFO: kube-proxy-mg87j from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan  6 18:45:50.727: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  6 18:45:50.727: INFO: kindnet-8vqrg from kube-system started at 2020-09-23 08:24:26 +0000 UTC (1 container statuses recorded)
Jan  6 18:45:50.727: INFO: 	Container kindnet-cni ready: true, restart count 0
Jan  6 18:45:50.727: INFO: coredns-coredns-5d8cb876b4-kkw4n from startup-test started at 2021-01-01 20:30:53 +0000 UTC (1 container statuses recorded)
Jan  6 18:45:50.727: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.1657b8f430f771b1], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:45:51.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-4gtrk" for this suite.
Jan  6 18:45:57.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:45:57.792: INFO: namespace: e2e-tests-sched-pred-4gtrk, resource: bindings, ignored listing per whitelist
Jan  6 18:45:57.852: INFO: namespace e2e-tests-sched-pred-4gtrk deletion completed in 6.102557917s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.223 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:45:57.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  6 18:45:57.954: INFO: Creating deployment "nginx-deployment"
Jan  6 18:45:57.968: INFO: Waiting for observed generation 1
Jan  6 18:45:59.977: INFO: Waiting for all required pods to come up
Jan  6 18:45:59.981: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan  6 18:46:09.993: INFO: Waiting for deployment "nginx-deployment" to complete
Jan  6 18:46:09.998: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan  6 18:46:10.003: INFO: Updating deployment nginx-deployment
Jan  6 18:46:10.003: INFO: Waiting for observed generation 2
Jan  6 18:46:12.063: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan  6 18:46:12.065: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan  6 18:46:12.068: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  6 18:46:12.074: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan  6 18:46:12.074: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan  6 18:46:12.077: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  6 18:46:12.084: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan  6 18:46:12.084: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan  6 18:46:12.089: INFO: Updating deployment nginx-deployment
Jan  6 18:46:12.090: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan  6 18:46:12.264: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan  6 18:46:12.891: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  6 18:46:13.222: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-d96b6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d96b6/deployments/nginx-deployment,UID:6a36f34e-504f-11eb-8302-0242ac120002,ResourceVersion:18067200,Generation:3,CreationTimestamp:2021-01-06 18:45:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2021-01-06 18:46:10 +0000 UTC 2021-01-06 18:45:57 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2021-01-06 18:46:12 +0000 UTC 2021-01-06 18:46:12 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan  6 18:46:13.290: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-d96b6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d96b6/replicasets/nginx-deployment-5c98f8fb5,UID:7165812e-504f-11eb-8302-0242ac120002,ResourceVersion:18067229,Generation:3,CreationTimestamp:2021-01-06 18:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6a36f34e-504f-11eb-8302-0242ac120002 0xc001a61aa7 0xc001a61aa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  6 18:46:13.290: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan  6 18:46:13.290: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-d96b6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d96b6/replicasets/nginx-deployment-85ddf47c5d,UID:6a3a1839-504f-11eb-8302-0242ac120002,ResourceVersion:18067223,Generation:3,CreationTimestamp:2021-01-06 18:45:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6a36f34e-504f-11eb-8302-0242ac120002 0xc001a61b67 0xc001a61b68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan  6 18:46:13.360: INFO: Pod "nginx-deployment-5c98f8fb5-6lfc5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6lfc5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-5c98f8fb5-6lfc5,UID:731e3956-504f-11eb-8302-0242ac120002,ResourceVersion:18067212,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7165812e-504f-11eb-8302-0242ac120002 0xc002262507 0xc002262508}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002262580} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0022625b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.360: INFO: Pod "nginx-deployment-5c98f8fb5-7rmhr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7rmhr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-5c98f8fb5-7rmhr,UID:733b2483-504f-11eb-8302-0242ac120002,ResourceVersion:18067228,Generation:0,CreationTimestamp:2021-01-06 18:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7165812e-504f-11eb-8302-0242ac120002 0xc002262627 0xc002262628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022626a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0022626c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.360: INFO: Pod "nginx-deployment-5c98f8fb5-9n59v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9n59v,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-5c98f8fb5-9n59v,UID:733b1108-504f-11eb-8302-0242ac120002,ResourceVersion:18067227,Generation:0,CreationTimestamp:2021-01-06 18:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7165812e-504f-11eb-8302-0242ac120002 0xc002262747 0xc002262748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002262810} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002262830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.361: INFO: Pod "nginx-deployment-5c98f8fb5-bbwdb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bbwdb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-5c98f8fb5-bbwdb,UID:71668f6a-504f-11eb-8302-0242ac120002,ResourceVersion:18067139,Generation:0,CreationTimestamp:2021-01-06 18:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7165812e-504f-11eb-8302-0242ac120002 0xc002262957 0xc002262958}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022629e0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002262a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-01-06 18:46:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.361: INFO: Pod "nginx-deployment-5c98f8fb5-dw78p" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dw78p,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-5c98f8fb5-dw78p,UID:7167b733-504f-11eb-8302-0242ac120002,ResourceVersion:18067148,Generation:0,CreationTimestamp:2021-01-06 18:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7165812e-504f-11eb-8302-0242ac120002 0xc002262b87 0xc002262b88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002262cb0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002262cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-01-06 18:46:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.361: INFO: Pod "nginx-deployment-5c98f8fb5-f7jcv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-f7jcv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-5c98f8fb5-f7jcv,UID:718e9ce7-504f-11eb-8302-0242ac120002,ResourceVersion:18067162,Generation:0,CreationTimestamp:2021-01-06 18:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7165812e-504f-11eb-8302-0242ac120002 0xc002262d97 0xc002262d98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002262e10} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002262e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-01-06 18:46:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.361: INFO: Pod "nginx-deployment-5c98f8fb5-g22vl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-g22vl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-5c98f8fb5-g22vl,UID:731e4fd8-504f-11eb-8302-0242ac120002,ResourceVersion:18067207,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7165812e-504f-11eb-8302-0242ac120002 0xc002262f37 0xc002262f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002262fb0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002262fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.361: INFO: Pod "nginx-deployment-5c98f8fb5-gp9gf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gp9gf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-5c98f8fb5-gp9gf,UID:7348261c-504f-11eb-8302-0242ac120002,ResourceVersion:18067231,Generation:0,CreationTimestamp:2021-01-06 18:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7165812e-504f-11eb-8302-0242ac120002 0xc002263047 0xc002263048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002263180} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0022631a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.361: INFO: Pod "nginx-deployment-5c98f8fb5-hqwrh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hqwrh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-5c98f8fb5-hqwrh,UID:733b462d-504f-11eb-8302-0242ac120002,ResourceVersion:18067224,Generation:0,CreationTimestamp:2021-01-06 18:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7165812e-504f-11eb-8302-0242ac120002 0xc002263307 0xc002263308}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002263380} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022633a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.362: INFO: Pod "nginx-deployment-5c98f8fb5-jrngv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jrngv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-5c98f8fb5-jrngv,UID:718b0ff4-504f-11eb-8302-0242ac120002,ResourceVersion:18067161,Generation:0,CreationTimestamp:2021-01-06 18:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7165812e-504f-11eb-8302-0242ac120002 0xc002263417 0xc002263418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022634c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022634e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-01-06 18:46:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.362: INFO: Pod "nginx-deployment-5c98f8fb5-sbxbk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sbxbk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-5c98f8fb5-sbxbk,UID:71678f99-504f-11eb-8302-0242ac120002,ResourceVersion:18067145,Generation:0,CreationTimestamp:2021-01-06 18:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7165812e-504f-11eb-8302-0242ac120002 0xc0022635a7 0xc0022635a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002263640} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002263660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:10 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-01-06 18:46:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.362: INFO: Pod "nginx-deployment-5c98f8fb5-tn4cd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tn4cd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-5c98f8fb5-tn4cd,UID:72c20f13-504f-11eb-8302-0242ac120002,ResourceVersion:18067194,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7165812e-504f-11eb-8302-0242ac120002 0xc002263737 0xc002263738}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022637b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022637d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:12 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.362: INFO: Pod "nginx-deployment-5c98f8fb5-zkq2q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zkq2q,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-5c98f8fb5-zkq2q,UID:733b4d2e-504f-11eb-8302-0242ac120002,ResourceVersion:18067226,Generation:0,CreationTimestamp:2021-01-06 18:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7165812e-504f-11eb-8302-0242ac120002 0xc002263857 0xc002263858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022638d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022638f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.362: INFO: Pod "nginx-deployment-85ddf47c5d-4hjdc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4hjdc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-4hjdc,UID:6a407344-504f-11eb-8302-0242ac120002,ResourceVersion:18067086,Generation:0,CreationTimestamp:2021-01-06 18:45:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002263967 0xc002263968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002263ae0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002263b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.91,StartTime:2021-01-06 18:45:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-06 18:46:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://52bbde4ed28b36fe3af3a4fcaf4be50e5f857f441598b9509d0e0739a122ca24}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.362: INFO: Pod "nginx-deployment-85ddf47c5d-5dn8f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5dn8f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-5dn8f,UID:731f37fe-504f-11eb-8302-0242ac120002,ResourceVersion:18067208,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002263bc7 0xc002263bc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002263d20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002263d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.362: INFO: Pod "nginx-deployment-85ddf47c5d-5q9jk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5q9jk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-5q9jk,UID:731f2d42-504f-11eb-8302-0242ac120002,ResourceVersion:18067211,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002263db7 0xc002263db8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002263e30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002263e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.363: INFO: Pod "nginx-deployment-85ddf47c5d-79lkx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-79lkx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-79lkx,UID:731f2edf-504f-11eb-8302-0242ac120002,ResourceVersion:18067210,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002263f37 0xc002263f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002263fb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002263fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.363: INFO: Pod "nginx-deployment-85ddf47c5d-7lhps" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7lhps,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-7lhps,UID:6a406e1e-504f-11eb-8302-0242ac120002,ResourceVersion:18067107,Generation:0,CreationTimestamp:2021-01-06 18:45:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002b444d7 0xc002b444d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b44550} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b44570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.92,StartTime:2021-01-06 18:45:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-06 18:46:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9ae8fabdd9e857de730da040d4b15da606570d5a71370452caf5de7fb26063ef}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.363: INFO: Pod "nginx-deployment-85ddf47c5d-9654c" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9654c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-9654c,UID:6a4ac65f-504f-11eb-8302-0242ac120002,ResourceVersion:18067105,Generation:0,CreationTimestamp:2021-01-06 18:45:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002b44907 0xc002b44908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b44980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b449a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.93,StartTime:2021-01-06 18:45:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-06 18:46:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8a41d62be5a0651190f9ecc097c17e33c9d21db0b5bca8e0874130aef7ea35b2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.363: INFO: Pod "nginx-deployment-85ddf47c5d-bwh8j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bwh8j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-bwh8j,UID:72c268d8-504f-11eb-8302-0242ac120002,ResourceVersion:18067198,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002b44da7 0xc002b44da8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b44e20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b44e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:12 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.363: INFO: Pod "nginx-deployment-85ddf47c5d-c2pmk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c2pmk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-c2pmk,UID:6a3f732e-504f-11eb-8302-0242ac120002,ResourceVersion:18067069,Generation:0,CreationTimestamp:2021-01-06 18:45:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002b44f37 0xc002b44f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc002b45020} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b45040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.90,StartTime:2021-01-06 18:45:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-06 18:46:05 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://79483a99e189bcbdc9088cab52a0c328de137b270c0d87996d6302974dbadfee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.363: INFO: Pod "nginx-deployment-85ddf47c5d-cbld2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cbld2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-cbld2,UID:72be76bc-504f-11eb-8302-0242ac120002,ResourceVersion:18067186,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002b451b7 0xc002b451b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc002b45240} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b45260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:12 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.363: INFO: Pod "nginx-deployment-85ddf47c5d-g5mjh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g5mjh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-g5mjh,UID:72be6e55-504f-11eb-8302-0242ac120002,ResourceVersion:18067238,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002b45367 0xc002b45368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc002b453e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b45400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:12 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-01-06 18:46:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.364: INFO: Pod "nginx-deployment-85ddf47c5d-kcc77" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kcc77,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-kcc77,UID:731f321b-504f-11eb-8302-0242ac120002,ResourceVersion:18067209,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002b45577 0xc002b45578}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc002b45620} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b45640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.364: INFO: Pod "nginx-deployment-85ddf47c5d-l526r" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l526r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-l526r,UID:6a406c72-504f-11eb-8302-0242ac120002,ResourceVersion:18067077,Generation:0,CreationTimestamp:2021-01-06 18:45:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002b456c7 0xc002b456c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc002b45740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b45760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.136,StartTime:2021-01-06 18:45:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-06 18:46:05 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e1934b29be9f9e8cdeea2e5466c897edd406ec31d39f07d589abfd0e4b0792fb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.364: INFO: Pod "nginx-deployment-85ddf47c5d-n94l5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n94l5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-n94l5,UID:72c25e58-504f-11eb-8302-0242ac120002,ResourceVersion:18067233,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002b458a7 0xc002b458a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc002b45920} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b45940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:12 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-01-06 18:46:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.365: INFO: Pod "nginx-deployment-85ddf47c5d-ngv8w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ngv8w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-ngv8w,UID:72c2720a-504f-11eb-8302-0242ac120002,ResourceVersion:18067197,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002b45a07 0xc002b45a08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc002b45a80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b45aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:12 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.365: INFO: Pod "nginx-deployment-85ddf47c5d-r7l66" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r7l66,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-r7l66,UID:6a3f6d07-504f-11eb-8302-0242ac120002,ResourceVersion:18067073,Generation:0,CreationTimestamp:2021-01-06 18:45:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002b45b17 0xc002b45b18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc002b45ba0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b45bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.135,StartTime:2021-01-06 18:45:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-06 18:46:05 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://478c1e8ca246b2bcbd81d33336fb254faba3ca2d29efc5e26821fe6cdbd26033}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.365: INFO: Pod "nginx-deployment-85ddf47c5d-rfjq4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rfjq4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-rfjq4,UID:6a4a6e63-504f-11eb-8302-0242ac120002,ResourceVersion:18067101,Generation:0,CreationTimestamp:2021-01-06 18:45:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002b45c87 0xc002b45c88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc002b45d10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b45d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.139,StartTime:2021-01-06 18:45:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-06 18:46:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7e11ba0b9054b40ce1bab857ba4d0c29099e43228c241638f7d5da17bf65af41}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.365: INFO: Pod "nginx-deployment-85ddf47c5d-rwsjr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rwsjr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-rwsjr,UID:72bdf890-504f-11eb-8302-0242ac120002,ResourceVersion:18067221,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002b45df7 0xc002b45df8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc002b45e80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b45ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:12 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-01-06 18:46:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.365: INFO: Pod "nginx-deployment-85ddf47c5d-sgbtw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sgbtw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-sgbtw,UID:731f2fe5-504f-11eb-8302-0242ac120002,ResourceVersion:18067213,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002b45f57 0xc002b45f58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc002b45fd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b45ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:13 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.366: INFO: Pod "nginx-deployment-85ddf47c5d-tqhsd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tqhsd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-tqhsd,UID:72c26b83-504f-11eb-8302-0242ac120002,ResourceVersion:18067199,Generation:0,CreationTimestamp:2021-01-06 18:46:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002aee067 0xc002aee068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc002aee0e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aee2b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:12 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 18:46:13.366: INFO: Pod "nginx-deployment-85ddf47c5d-vgkg4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vgkg4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d96b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d96b6/pods/nginx-deployment-85ddf47c5d-vgkg4,UID:6a3eadb7-504f-11eb-8302-0242ac120002,ResourceVersion:18067050,Generation:0,CreationTimestamp:2021-01-06 18:45:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6a3a1839-504f-11eb-8302-0242ac120002 0xc002aee337 0xc002aee338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzrw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzrw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bzrw4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc002aee3b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aee3d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:46:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-06 18:45:58 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.89,StartTime:2021-01-06 18:45:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-06 18:46:01 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://969e5209774c112e17bd3cc60f60fcc536a8b96b3c8304014cb679385910635b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:46:13.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-d96b6" for this suite.
Jan  6 18:46:41.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:46:41.604: INFO: namespace: e2e-tests-deployment-d96b6, resource: bindings, ignored listing per whitelist
Jan  6 18:46:41.633: INFO: namespace e2e-tests-deployment-d96b6 deletion completed in 28.144747565s

• [SLOW TEST:43.781 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:46:41.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-8460fb3b-504f-11eb-8655-0242ac110009
STEP: Creating configMap with name cm-test-opt-upd-8460fb9e-504f-11eb-8655-0242ac110009
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8460fb3b-504f-11eb-8655-0242ac110009
STEP: Updating configmap cm-test-opt-upd-8460fb9e-504f-11eb-8655-0242ac110009
STEP: Creating configMap with name cm-test-opt-create-8460fbc3-504f-11eb-8655-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:46:52.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4cvmp" for this suite.
Jan  6 18:47:14.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:47:14.442: INFO: namespace: e2e-tests-configmap-4cvmp, resource: bindings, ignored listing per whitelist
Jan  6 18:47:14.461: INFO: namespace e2e-tests-configmap-4cvmp deletion completed in 22.102958885s

• [SLOW TEST:32.827 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:47:14.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
Jan  6 18:47:26.271: INFO: 5 pods remaining
Jan  6 18:47:26.271: INFO: 5 pods have nil DeletionTimestamp
Jan  6 18:47:26.271: INFO: 
STEP: Gathering metrics
W0106 18:47:31.242167       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  6 18:47:31.242: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:47:31.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-cxqs5" for this suite.
Jan  6 18:47:39.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:47:39.282: INFO: namespace: e2e-tests-gc-cxqs5, resource: bindings, ignored listing per whitelist
Jan  6 18:47:39.355: INFO: namespace e2e-tests-gc-cxqs5 deletion completed in 8.109106106s

• [SLOW TEST:24.893 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:47:39.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-a6c4ea91-504f-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  6 18:47:39.609: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a6ca3254-504f-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-w92vz" to be "success or failure"
Jan  6 18:47:39.613: INFO: Pod "pod-projected-configmaps-a6ca3254-504f-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.92436ms
Jan  6 18:47:41.617: INFO: Pod "pod-projected-configmaps-a6ca3254-504f-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008478407s
Jan  6 18:47:43.621: INFO: Pod "pod-projected-configmaps-a6ca3254-504f-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012087846s
STEP: Saw pod success
Jan  6 18:47:43.621: INFO: Pod "pod-projected-configmaps-a6ca3254-504f-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:47:43.623: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-a6ca3254-504f-11eb-8655-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  6 18:47:43.654: INFO: Waiting for pod pod-projected-configmaps-a6ca3254-504f-11eb-8655-0242ac110009 to disappear
Jan  6 18:47:43.667: INFO: Pod pod-projected-configmaps-a6ca3254-504f-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:47:43.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w92vz" for this suite.
Jan  6 18:47:49.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:47:49.742: INFO: namespace: e2e-tests-projected-w92vz, resource: bindings, ignored listing per whitelist
Jan  6 18:47:49.771: INFO: namespace e2e-tests-projected-w92vz deletion completed in 6.101647217s

• [SLOW TEST:10.417 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:47:49.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-acfba9c6-504f-11eb-8655-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  6 18:47:50.060: INFO: Waiting up to 5m0s for pod "pod-secrets-ad000a13-504f-11eb-8655-0242ac110009" in namespace "e2e-tests-secrets-bjz7c" to be "success or failure"
Jan  6 18:47:50.075: INFO: Pod "pod-secrets-ad000a13-504f-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 14.841781ms
Jan  6 18:47:52.078: INFO: Pod "pod-secrets-ad000a13-504f-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018448863s
Jan  6 18:47:54.083: INFO: Pod "pod-secrets-ad000a13-504f-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023132707s
STEP: Saw pod success
Jan  6 18:47:54.083: INFO: Pod "pod-secrets-ad000a13-504f-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:47:54.087: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-ad000a13-504f-11eb-8655-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan  6 18:47:54.106: INFO: Waiting for pod pod-secrets-ad000a13-504f-11eb-8655-0242ac110009 to disappear
Jan  6 18:47:54.134: INFO: Pod pod-secrets-ad000a13-504f-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:47:54.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bjz7c" for this suite.
Jan  6 18:48:00.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:48:00.235: INFO: namespace: e2e-tests-secrets-bjz7c, resource: bindings, ignored listing per whitelist
Jan  6 18:48:00.240: INFO: namespace e2e-tests-secrets-bjz7c deletion completed in 6.102095314s

• [SLOW TEST:10.469 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:48:00.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-b32f2b17-504f-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  6 18:48:00.401: INFO: Waiting up to 5m0s for pod "pod-configmaps-b33165b1-504f-11eb-8655-0242ac110009" in namespace "e2e-tests-configmap-9kvww" to be "success or failure"
Jan  6 18:48:00.447: INFO: Pod "pod-configmaps-b33165b1-504f-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 46.901293ms
Jan  6 18:48:02.451: INFO: Pod "pod-configmaps-b33165b1-504f-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050639412s
Jan  6 18:48:04.455: INFO: Pod "pod-configmaps-b33165b1-504f-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.054501481s
Jan  6 18:48:06.460: INFO: Pod "pod-configmaps-b33165b1-504f-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058939821s
STEP: Saw pod success
Jan  6 18:48:06.460: INFO: Pod "pod-configmaps-b33165b1-504f-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:48:06.463: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-b33165b1-504f-11eb-8655-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jan  6 18:48:06.494: INFO: Waiting for pod pod-configmaps-b33165b1-504f-11eb-8655-0242ac110009 to disappear
Jan  6 18:48:06.511: INFO: Pod pod-configmaps-b33165b1-504f-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:48:06.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9kvww" for this suite.
Jan  6 18:48:14.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:48:14.607: INFO: namespace: e2e-tests-configmap-9kvww, resource: bindings, ignored listing per whitelist
Jan  6 18:48:14.623: INFO: namespace e2e-tests-configmap-9kvww deletion completed in 8.107821721s

• [SLOW TEST:14.382 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:48:14.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-thjh
STEP: Creating a pod to test atomic-volume-subpath
Jan  6 18:48:14.795: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-thjh" in namespace "e2e-tests-subpath-9l24q" to be "success or failure"
Jan  6 18:48:14.843: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Pending", Reason="", readiness=false. Elapsed: 47.685564ms
Jan  6 18:48:16.879: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083854606s
Jan  6 18:48:18.884: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088300193s
Jan  6 18:48:20.888: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092714957s
Jan  6 18:48:22.891: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Running", Reason="", readiness=false. Elapsed: 8.095974648s
Jan  6 18:48:24.896: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Running", Reason="", readiness=false. Elapsed: 10.100291614s
Jan  6 18:48:26.899: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Running", Reason="", readiness=false. Elapsed: 12.103549587s
Jan  6 18:48:28.904: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Running", Reason="", readiness=false. Elapsed: 14.109028506s
Jan  6 18:48:30.909: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Running", Reason="", readiness=false. Elapsed: 16.113439831s
Jan  6 18:48:32.912: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Running", Reason="", readiness=false. Elapsed: 18.116690714s
Jan  6 18:48:34.917: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Running", Reason="", readiness=false. Elapsed: 20.121153302s
Jan  6 18:48:36.920: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Running", Reason="", readiness=false. Elapsed: 22.124750573s
Jan  6 18:48:38.925: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Running", Reason="", readiness=false. Elapsed: 24.129233942s
Jan  6 18:48:40.929: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Running", Reason="", readiness=false. Elapsed: 26.13396141s
Jan  6 18:48:42.934: INFO: Pod "pod-subpath-test-downwardapi-thjh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.138160109s
STEP: Saw pod success
Jan  6 18:48:42.934: INFO: Pod "pod-subpath-test-downwardapi-thjh" satisfied condition "success or failure"
Jan  6 18:48:42.937: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-thjh container test-container-subpath-downwardapi-thjh: 
STEP: delete the pod
Jan  6 18:48:43.031: INFO: Waiting for pod pod-subpath-test-downwardapi-thjh to disappear
Jan  6 18:48:43.039: INFO: Pod pod-subpath-test-downwardapi-thjh no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-thjh
Jan  6 18:48:43.039: INFO: Deleting pod "pod-subpath-test-downwardapi-thjh" in namespace "e2e-tests-subpath-9l24q"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:48:43.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-9l24q" for this suite.
Jan  6 18:48:49.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:48:49.202: INFO: namespace: e2e-tests-subpath-9l24q, resource: bindings, ignored listing per whitelist
Jan  6 18:48:49.215: INFO: namespace e2e-tests-subpath-9l24q deletion completed in 6.171289855s

• [SLOW TEST:34.592 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:48:49.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-s7cxb
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  6 18:48:49.359: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  6 18:49:13.571: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.115:8080/dial?request=hostName&protocol=udp&host=10.244.1.160&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-s7cxb PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 18:49:13.571: INFO: >>> kubeConfig: /root/.kube/config
I0106 18:49:13.605147       6 log.go:172] (0xc001a444d0) (0xc0025b7540) Create stream
I0106 18:49:13.605187       6 log.go:172] (0xc001a444d0) (0xc0025b7540) Stream added, broadcasting: 1
I0106 18:49:13.606958       6 log.go:172] (0xc001a444d0) Reply frame received for 1
I0106 18:49:13.606998       6 log.go:172] (0xc001a444d0) (0xc002929d60) Create stream
I0106 18:49:13.607012       6 log.go:172] (0xc001a444d0) (0xc002929d60) Stream added, broadcasting: 3
I0106 18:49:13.607942       6 log.go:172] (0xc001a444d0) Reply frame received for 3
I0106 18:49:13.607983       6 log.go:172] (0xc001a444d0) (0xc001507400) Create stream
I0106 18:49:13.608002       6 log.go:172] (0xc001a444d0) (0xc001507400) Stream added, broadcasting: 5
I0106 18:49:13.609252       6 log.go:172] (0xc001a444d0) Reply frame received for 5
I0106 18:49:13.695646       6 log.go:172] (0xc001a444d0) Data frame received for 3
I0106 18:49:13.695693       6 log.go:172] (0xc002929d60) (3) Data frame handling
I0106 18:49:13.695730       6 log.go:172] (0xc002929d60) (3) Data frame sent
I0106 18:49:13.696188       6 log.go:172] (0xc001a444d0) Data frame received for 5
I0106 18:49:13.696209       6 log.go:172] (0xc001507400) (5) Data frame handling
I0106 18:49:13.696288       6 log.go:172] (0xc001a444d0) Data frame received for 3
I0106 18:49:13.696303       6 log.go:172] (0xc002929d60) (3) Data frame handling
I0106 18:49:13.698276       6 log.go:172] (0xc001a444d0) Data frame received for 1
I0106 18:49:13.698291       6 log.go:172] (0xc0025b7540) (1) Data frame handling
I0106 18:49:13.698300       6 log.go:172] (0xc0025b7540) (1) Data frame sent
I0106 18:49:13.698318       6 log.go:172] (0xc001a444d0) (0xc0025b7540) Stream removed, broadcasting: 1
I0106 18:49:13.698338       6 log.go:172] (0xc001a444d0) Go away received
I0106 18:49:13.698421       6 log.go:172] (0xc001a444d0) (0xc0025b7540) Stream removed, broadcasting: 1
I0106 18:49:13.698442       6 log.go:172] (0xc001a444d0) (0xc002929d60) Stream removed, broadcasting: 3
I0106 18:49:13.698452       6 log.go:172] (0xc001a444d0) (0xc001507400) Stream removed, broadcasting: 5
Jan  6 18:49:13.698: INFO: Waiting for endpoints: map[]
Jan  6 18:49:13.701: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.115:8080/dial?request=hostName&protocol=udp&host=10.244.2.114&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-s7cxb PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 18:49:13.701: INFO: >>> kubeConfig: /root/.kube/config
I0106 18:49:13.737565       6 log.go:172] (0xc001b3a2c0) (0xc0017677c0) Create stream
I0106 18:49:13.737605       6 log.go:172] (0xc001b3a2c0) (0xc0017677c0) Stream added, broadcasting: 1
I0106 18:49:13.740070       6 log.go:172] (0xc001b3a2c0) Reply frame received for 1
I0106 18:49:13.740096       6 log.go:172] (0xc001b3a2c0) (0xc0015b8280) Create stream
I0106 18:49:13.740102       6 log.go:172] (0xc001b3a2c0) (0xc0015b8280) Stream added, broadcasting: 3
I0106 18:49:13.741067       6 log.go:172] (0xc001b3a2c0) Reply frame received for 3
I0106 18:49:13.741114       6 log.go:172] (0xc001b3a2c0) (0xc0025b75e0) Create stream
I0106 18:49:13.741139       6 log.go:172] (0xc001b3a2c0) (0xc0025b75e0) Stream added, broadcasting: 5
I0106 18:49:13.742393       6 log.go:172] (0xc001b3a2c0) Reply frame received for 5
I0106 18:49:13.806962       6 log.go:172] (0xc001b3a2c0) Data frame received for 3
I0106 18:49:13.807027       6 log.go:172] (0xc0015b8280) (3) Data frame handling
I0106 18:49:13.807045       6 log.go:172] (0xc0015b8280) (3) Data frame sent
I0106 18:49:13.808049       6 log.go:172] (0xc001b3a2c0) Data frame received for 3
I0106 18:49:13.808067       6 log.go:172] (0xc0015b8280) (3) Data frame handling
I0106 18:49:13.808106       6 log.go:172] (0xc001b3a2c0) Data frame received for 5
I0106 18:49:13.808137       6 log.go:172] (0xc0025b75e0) (5) Data frame handling
I0106 18:49:13.810004       6 log.go:172] (0xc001b3a2c0) Data frame received for 1
I0106 18:49:13.810024       6 log.go:172] (0xc0017677c0) (1) Data frame handling
I0106 18:49:13.810040       6 log.go:172] (0xc0017677c0) (1) Data frame sent
I0106 18:49:13.810048       6 log.go:172] (0xc001b3a2c0) (0xc0017677c0) Stream removed, broadcasting: 1
I0106 18:49:13.810113       6 log.go:172] (0xc001b3a2c0) Go away received
I0106 18:49:13.810143       6 log.go:172] (0xc001b3a2c0) (0xc0017677c0) Stream removed, broadcasting: 1
I0106 18:49:13.810159       6 log.go:172] (0xc001b3a2c0) (0xc0015b8280) Stream removed, broadcasting: 3
I0106 18:49:13.810166       6 log.go:172] (0xc001b3a2c0) (0xc0025b75e0) Stream removed, broadcasting: 5
Jan  6 18:49:13.810: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:49:13.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-s7cxb" for this suite.
Jan  6 18:49:37.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:49:37.878: INFO: namespace: e2e-tests-pod-network-test-s7cxb, resource: bindings, ignored listing per whitelist
Jan  6 18:49:37.950: INFO: namespace e2e-tests-pod-network-test-s7cxb deletion completed in 24.135608323s

• [SLOW TEST:48.734 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:49:37.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan  6 18:49:42.137: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-ed696a80-504f-11eb-8655-0242ac110009", GenerateName:"", Namespace:"e2e-tests-pods-rg6vk", SelfLink:"/api/v1/namespaces/e2e-tests-pods-rg6vk/pods/pod-submit-remove-ed696a80-504f-11eb-8655-0242ac110009", UID:"ed6a6632-504f-11eb-8302-0242ac120002", ResourceVersion:"18068380", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63745555778, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"67939996"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-9mfcm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0017c8fc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9mfcm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0026a8958), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001334960), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0026a8ae0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0026a8b00)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0026a8b08), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0026a8b0c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745555778, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745555781, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745555781, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745555778, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.1.161", StartTime:(*v1.Time)(0xc0011ef6e0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0011ef700), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://c314bafb92e906779f9c6048b273fe3a8abd2286ec082d908221a1cfedd019e6"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan  6 18:49:47.158: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:49:47.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rg6vk" for this suite.
Jan  6 18:49:53.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:49:53.280: INFO: namespace: e2e-tests-pods-rg6vk, resource: bindings, ignored listing per whitelist
Jan  6 18:49:53.285: INFO: namespace e2e-tests-pods-rg6vk deletion completed in 6.120904824s

• [SLOW TEST:15.335 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:49:53.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-qlgrm
Jan  6 18:49:59.398: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-qlgrm
STEP: checking the pod's current state and verifying that restartCount is present
Jan  6 18:49:59.400: INFO: Initial restart count of pod liveness-exec is 0
Jan  6 18:50:45.497: INFO: Restart count of pod e2e-tests-container-probe-qlgrm/liveness-exec is now 1 (46.096816187s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:50:45.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-qlgrm" for this suite.
Jan  6 18:50:51.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:50:51.548: INFO: namespace: e2e-tests-container-probe-qlgrm, resource: bindings, ignored listing per whitelist
Jan  6 18:50:51.611: INFO: namespace e2e-tests-container-probe-qlgrm deletion completed in 6.095221311s

• [SLOW TEST:58.325 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:50:51.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan  6 18:50:51.781: INFO: Waiting up to 5m0s for pod "var-expansion-194a52ef-5050-11eb-8655-0242ac110009" in namespace "e2e-tests-var-expansion-lgtgq" to be "success or failure"
Jan  6 18:50:51.785: INFO: Pod "var-expansion-194a52ef-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.710943ms
Jan  6 18:50:53.789: INFO: Pod "var-expansion-194a52ef-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008062217s
Jan  6 18:50:55.792: INFO: Pod "var-expansion-194a52ef-5050-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011300877s
STEP: Saw pod success
Jan  6 18:50:55.792: INFO: Pod "var-expansion-194a52ef-5050-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:50:55.795: INFO: Trying to get logs from node hunter-worker pod var-expansion-194a52ef-5050-11eb-8655-0242ac110009 container dapi-container: 
STEP: delete the pod
Jan  6 18:50:55.941: INFO: Waiting for pod var-expansion-194a52ef-5050-11eb-8655-0242ac110009 to disappear
Jan  6 18:50:55.946: INFO: Pod var-expansion-194a52ef-5050-11eb-8655-0242ac110009 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:50:55.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-lgtgq" for this suite.
Jan  6 18:51:02.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:51:02.027: INFO: namespace: e2e-tests-var-expansion-lgtgq, resource: bindings, ignored listing per whitelist
Jan  6 18:51:02.104: INFO: namespace e2e-tests-var-expansion-lgtgq deletion completed in 6.154772751s

• [SLOW TEST:10.493 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:51:02.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  6 18:51:02.258: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f8f5458-5050-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-24lg7" to be "success or failure"
Jan  6 18:51:02.260: INFO: Pod "downwardapi-volume-1f8f5458-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270936ms
Jan  6 18:51:04.265: INFO: Pod "downwardapi-volume-1f8f5458-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006654418s
Jan  6 18:51:06.269: INFO: Pod "downwardapi-volume-1f8f5458-5050-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.011382567s
Jan  6 18:51:08.274: INFO: Pod "downwardapi-volume-1f8f5458-5050-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015858303s
STEP: Saw pod success
Jan  6 18:51:08.274: INFO: Pod "downwardapi-volume-1f8f5458-5050-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:51:08.277: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1f8f5458-5050-11eb-8655-0242ac110009 container client-container: 
STEP: delete the pod
Jan  6 18:51:08.331: INFO: Waiting for pod downwardapi-volume-1f8f5458-5050-11eb-8655-0242ac110009 to disappear
Jan  6 18:51:08.341: INFO: Pod downwardapi-volume-1f8f5458-5050-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:51:08.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-24lg7" for this suite.
Jan  6 18:51:14.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:51:14.473: INFO: namespace: e2e-tests-projected-24lg7, resource: bindings, ignored listing per whitelist
Jan  6 18:51:14.539: INFO: namespace e2e-tests-projected-24lg7 deletion completed in 6.195107179s

• [SLOW TEST:12.435 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:51:14.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-26f71ef0-5050-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  6 18:51:14.752: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-26fc3406-5050-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-7zkcx" to be "success or failure"
Jan  6 18:51:14.781: INFO: Pod "pod-projected-configmaps-26fc3406-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 28.853345ms
Jan  6 18:51:16.785: INFO: Pod "pod-projected-configmaps-26fc3406-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032999699s
Jan  6 18:51:18.798: INFO: Pod "pod-projected-configmaps-26fc3406-5050-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045644308s
STEP: Saw pod success
Jan  6 18:51:18.798: INFO: Pod "pod-projected-configmaps-26fc3406-5050-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:51:18.801: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-26fc3406-5050-11eb-8655-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  6 18:51:18.829: INFO: Waiting for pod pod-projected-configmaps-26fc3406-5050-11eb-8655-0242ac110009 to disappear
Jan  6 18:51:18.862: INFO: Pod pod-projected-configmaps-26fc3406-5050-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:51:18.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7zkcx" for this suite.
Jan  6 18:51:24.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:51:24.944: INFO: namespace: e2e-tests-projected-7zkcx, resource: bindings, ignored listing per whitelist
Jan  6 18:51:24.955: INFO: namespace e2e-tests-projected-7zkcx deletion completed in 6.089269014s

• [SLOW TEST:10.415 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:51:24.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-2d2d7c70-5050-11eb-8655-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  6 18:51:25.092: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2d2f5ff2-5050-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-xptvv" to be "success or failure"
Jan  6 18:51:25.097: INFO: Pod "pod-projected-secrets-2d2f5ff2-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 5.008504ms
Jan  6 18:51:27.101: INFO: Pod "pod-projected-secrets-2d2f5ff2-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00868497s
Jan  6 18:51:29.105: INFO: Pod "pod-projected-secrets-2d2f5ff2-5050-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013000795s
STEP: Saw pod success
Jan  6 18:51:29.105: INFO: Pod "pod-projected-secrets-2d2f5ff2-5050-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:51:29.109: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-2d2f5ff2-5050-11eb-8655-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan  6 18:51:29.236: INFO: Waiting for pod pod-projected-secrets-2d2f5ff2-5050-11eb-8655-0242ac110009 to disappear
Jan  6 18:51:29.265: INFO: Pod pod-projected-secrets-2d2f5ff2-5050-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:51:29.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xptvv" for this suite.
Jan  6 18:51:35.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:51:35.333: INFO: namespace: e2e-tests-projected-xptvv, resource: bindings, ignored listing per whitelist
Jan  6 18:51:35.372: INFO: namespace e2e-tests-projected-xptvv deletion completed in 6.10301372s

• [SLOW TEST:10.417 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:51:35.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-vnw5k/secret-test-335fe4b8-5050-11eb-8655-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  6 18:51:35.486: INFO: Waiting up to 5m0s for pod "pod-configmaps-33616cf5-5050-11eb-8655-0242ac110009" in namespace "e2e-tests-secrets-vnw5k" to be "success or failure"
Jan  6 18:51:35.535: INFO: Pod "pod-configmaps-33616cf5-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 48.757364ms
Jan  6 18:51:37.623: INFO: Pod "pod-configmaps-33616cf5-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137513152s
Jan  6 18:51:39.628: INFO: Pod "pod-configmaps-33616cf5-5050-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141762013s
STEP: Saw pod success
Jan  6 18:51:39.628: INFO: Pod "pod-configmaps-33616cf5-5050-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:51:39.630: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-33616cf5-5050-11eb-8655-0242ac110009 container env-test: 
STEP: delete the pod
Jan  6 18:51:39.677: INFO: Waiting for pod pod-configmaps-33616cf5-5050-11eb-8655-0242ac110009 to disappear
Jan  6 18:51:39.683: INFO: Pod pod-configmaps-33616cf5-5050-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:51:39.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vnw5k" for this suite.
Jan  6 18:51:45.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:51:45.761: INFO: namespace: e2e-tests-secrets-vnw5k, resource: bindings, ignored listing per whitelist
Jan  6 18:51:45.879: INFO: namespace e2e-tests-secrets-vnw5k deletion completed in 6.192899297s

• [SLOW TEST:10.507 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:51:45.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-fdww2/configmap-test-39a5669b-5050-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  6 18:51:46.003: INFO: Waiting up to 5m0s for pod "pod-configmaps-39a92db8-5050-11eb-8655-0242ac110009" in namespace "e2e-tests-configmap-fdww2" to be "success or failure"
Jan  6 18:51:46.020: INFO: Pod "pod-configmaps-39a92db8-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.356246ms
Jan  6 18:51:48.024: INFO: Pod "pod-configmaps-39a92db8-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020743517s
Jan  6 18:51:50.028: INFO: Pod "pod-configmaps-39a92db8-5050-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024841239s
STEP: Saw pod success
Jan  6 18:51:50.028: INFO: Pod "pod-configmaps-39a92db8-5050-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:51:50.031: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-39a92db8-5050-11eb-8655-0242ac110009 container env-test: 
STEP: delete the pod
Jan  6 18:51:50.292: INFO: Waiting for pod pod-configmaps-39a92db8-5050-11eb-8655-0242ac110009 to disappear
Jan  6 18:51:50.307: INFO: Pod pod-configmaps-39a92db8-5050-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:51:50.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fdww2" for this suite.
Jan  6 18:51:56.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:51:56.335: INFO: namespace: e2e-tests-configmap-fdww2, resource: bindings, ignored listing per whitelist
Jan  6 18:51:56.453: INFO: namespace e2e-tests-configmap-fdww2 deletion completed in 6.143039012s

• [SLOW TEST:10.574 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:51:56.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  6 18:51:56.555: INFO: Waiting up to 5m0s for pod "pod-3ff40685-5050-11eb-8655-0242ac110009" in namespace "e2e-tests-emptydir-d7z29" to be "success or failure"
Jan  6 18:51:56.624: INFO: Pod "pod-3ff40685-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 68.968042ms
Jan  6 18:51:58.627: INFO: Pod "pod-3ff40685-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072237502s
Jan  6 18:52:00.631: INFO: Pod "pod-3ff40685-5050-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076361147s
STEP: Saw pod success
Jan  6 18:52:00.632: INFO: Pod "pod-3ff40685-5050-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:52:00.634: INFO: Trying to get logs from node hunter-worker2 pod pod-3ff40685-5050-11eb-8655-0242ac110009 container test-container: 
STEP: delete the pod
Jan  6 18:52:00.667: INFO: Waiting for pod pod-3ff40685-5050-11eb-8655-0242ac110009 to disappear
Jan  6 18:52:00.720: INFO: Pod pod-3ff40685-5050-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:52:00.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-d7z29" for this suite.
Jan  6 18:52:08.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:52:08.774: INFO: namespace: e2e-tests-emptydir-d7z29, resource: bindings, ignored listing per whitelist
Jan  6 18:52:08.822: INFO: namespace e2e-tests-emptydir-d7z29 deletion completed in 8.097863088s

• [SLOW TEST:12.369 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:52:08.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  6 18:52:08.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-l4m92'
Jan  6 18:52:12.549: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  6 18:52:12.549: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan  6 18:52:12.560: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-2m9ww]
Jan  6 18:52:12.560: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-2m9ww" in namespace "e2e-tests-kubectl-l4m92" to be "running and ready"
Jan  6 18:52:12.612: INFO: Pod "e2e-test-nginx-rc-2m9ww": Phase="Pending", Reason="", readiness=false. Elapsed: 52.167777ms
Jan  6 18:52:14.616: INFO: Pod "e2e-test-nginx-rc-2m9ww": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056493612s
Jan  6 18:52:16.621: INFO: Pod "e2e-test-nginx-rc-2m9ww": Phase="Running", Reason="", readiness=true. Elapsed: 4.06121901s
Jan  6 18:52:16.621: INFO: Pod "e2e-test-nginx-rc-2m9ww" satisfied condition "running and ready"
Jan  6 18:52:16.621: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-2m9ww]
Jan  6 18:52:16.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-l4m92'
Jan  6 18:52:16.763: INFO: stderr: ""
Jan  6 18:52:16.763: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan  6 18:52:16.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-l4m92'
Jan  6 18:52:16.886: INFO: stderr: ""
Jan  6 18:52:16.886: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:52:16.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-l4m92" for this suite.
Jan  6 18:52:22.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:52:22.956: INFO: namespace: e2e-tests-kubectl-l4m92, resource: bindings, ignored listing per whitelist
Jan  6 18:52:22.985: INFO: namespace e2e-tests-kubectl-l4m92 deletion completed in 6.095147017s

• [SLOW TEST:14.162 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:52:22.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  6 18:52:23.078: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4fc1cb93-5050-11eb-8655-0242ac110009" in namespace "e2e-tests-downward-api-ffd27" to be "success or failure"
Jan  6 18:52:23.093: INFO: Pod "downwardapi-volume-4fc1cb93-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 14.995053ms
Jan  6 18:52:25.097: INFO: Pod "downwardapi-volume-4fc1cb93-5050-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019206191s
Jan  6 18:52:27.100: INFO: Pod "downwardapi-volume-4fc1cb93-5050-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022599394s
STEP: Saw pod success
Jan  6 18:52:27.100: INFO: Pod "downwardapi-volume-4fc1cb93-5050-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:52:27.103: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-4fc1cb93-5050-11eb-8655-0242ac110009 container client-container: 
STEP: delete the pod
Jan  6 18:52:27.116: INFO: Waiting for pod downwardapi-volume-4fc1cb93-5050-11eb-8655-0242ac110009 to disappear
Jan  6 18:52:27.127: INFO: Pod downwardapi-volume-4fc1cb93-5050-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:52:27.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ffd27" for this suite.
Jan  6 18:52:33.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:52:33.286: INFO: namespace: e2e-tests-downward-api-ffd27, resource: bindings, ignored listing per whitelist
Jan  6 18:52:33.316: INFO: namespace e2e-tests-downward-api-ffd27 deletion completed in 6.185043442s

• [SLOW TEST:10.331 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:52:33.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  6 18:52:41.541: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  6 18:52:41.554: INFO: Pod pod-with-prestop-http-hook still exists
Jan  6 18:52:43.554: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  6 18:52:43.559: INFO: Pod pod-with-prestop-http-hook still exists
Jan  6 18:52:45.554: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  6 18:52:45.558: INFO: Pod pod-with-prestop-http-hook still exists
Jan  6 18:52:47.554: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  6 18:52:47.559: INFO: Pod pod-with-prestop-http-hook still exists
Jan  6 18:52:49.554: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  6 18:52:49.559: INFO: Pod pod-with-prestop-http-hook still exists
Jan  6 18:52:51.554: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  6 18:52:51.559: INFO: Pod pod-with-prestop-http-hook still exists
Jan  6 18:52:53.554: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  6 18:52:53.559: INFO: Pod pod-with-prestop-http-hook still exists
Jan  6 18:52:55.555: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  6 18:52:55.558: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:52:55.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7bzb8" for this suite.
Jan  6 18:53:17.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:53:17.673: INFO: namespace: e2e-tests-container-lifecycle-hook-7bzb8, resource: bindings, ignored listing per whitelist
Jan  6 18:53:17.681: INFO: namespace e2e-tests-container-lifecycle-hook-7bzb8 deletion completed in 22.11568983s

• [SLOW TEST:44.365 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:53:17.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-6l9f5.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6l9f5.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6l9f5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-6l9f5.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6l9f5.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6l9f5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  6 18:53:23.905: INFO: DNS probes using e2e-tests-dns-6l9f5/dns-test-705f3ce8-5050-11eb-8655-0242ac110009 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:53:23.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-6l9f5" for this suite.
Jan  6 18:53:30.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:53:30.042: INFO: namespace: e2e-tests-dns-6l9f5, resource: bindings, ignored listing per whitelist
Jan  6 18:53:30.094: INFO: namespace e2e-tests-dns-6l9f5 deletion completed in 6.098612668s

• [SLOW TEST:12.413 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:53:30.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:53:34.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-rzvd2" for this suite.
Jan  6 18:53:40.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:53:40.435: INFO: namespace: e2e-tests-emptydir-wrapper-rzvd2, resource: bindings, ignored listing per whitelist
Jan  6 18:53:40.441: INFO: namespace e2e-tests-emptydir-wrapper-rzvd2 deletion completed in 6.113862106s

• [SLOW TEST:10.346 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:53:40.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-rgpcj
Jan  6 18:53:44.535: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-rgpcj
STEP: checking the pod's current state and verifying that restartCount is present
Jan  6 18:53:44.538: INFO: Initial restart count of pod liveness-http is 0
Jan  6 18:54:00.573: INFO: Restart count of pod e2e-tests-container-probe-rgpcj/liveness-http is now 1 (16.035785763s elapsed)
Jan  6 18:54:22.706: INFO: Restart count of pod e2e-tests-container-probe-rgpcj/liveness-http is now 2 (38.16827528s elapsed)
Jan  6 18:54:40.743: INFO: Restart count of pod e2e-tests-container-probe-rgpcj/liveness-http is now 3 (56.204981472s elapsed)
Jan  6 18:55:00.784: INFO: Restart count of pod e2e-tests-container-probe-rgpcj/liveness-http is now 4 (1m16.245962604s elapsed)
Jan  6 18:56:02.949: INFO: Restart count of pod e2e-tests-container-probe-rgpcj/liveness-http is now 5 (2m18.411406688s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:56:02.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rgpcj" for this suite.
Jan  6 18:56:09.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:56:09.102: INFO: namespace: e2e-tests-container-probe-rgpcj, resource: bindings, ignored listing per whitelist
Jan  6 18:56:09.148: INFO: namespace e2e-tests-container-probe-rgpcj deletion completed in 6.159836964s

• [SLOW TEST:148.707 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:56:09.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-kv8rh
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  6 18:56:09.263: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  6 18:56:39.366: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.126 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-kv8rh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 18:56:39.366: INFO: >>> kubeConfig: /root/.kube/config
I0106 18:56:39.404526       6 log.go:172] (0xc001a442c0) (0xc002804e60) Create stream
I0106 18:56:39.404556       6 log.go:172] (0xc001a442c0) (0xc002804e60) Stream added, broadcasting: 1
I0106 18:56:39.407600       6 log.go:172] (0xc001a442c0) Reply frame received for 1
I0106 18:56:39.407646       6 log.go:172] (0xc001a442c0) (0xc0019a3a40) Create stream
I0106 18:56:39.407661       6 log.go:172] (0xc001a442c0) (0xc0019a3a40) Stream added, broadcasting: 3
I0106 18:56:39.408804       6 log.go:172] (0xc001a442c0) Reply frame received for 3
I0106 18:56:39.408949       6 log.go:172] (0xc001a442c0) (0xc002805040) Create stream
I0106 18:56:39.408977       6 log.go:172] (0xc001a442c0) (0xc002805040) Stream added, broadcasting: 5
I0106 18:56:39.410032       6 log.go:172] (0xc001a442c0) Reply frame received for 5
I0106 18:56:40.524972       6 log.go:172] (0xc001a442c0) Data frame received for 3
I0106 18:56:40.525027       6 log.go:172] (0xc0019a3a40) (3) Data frame handling
I0106 18:56:40.525053       6 log.go:172] (0xc0019a3a40) (3) Data frame sent
I0106 18:56:40.525868       6 log.go:172] (0xc001a442c0) Data frame received for 5
I0106 18:56:40.525896       6 log.go:172] (0xc002805040) (5) Data frame handling
I0106 18:56:40.525935       6 log.go:172] (0xc001a442c0) Data frame received for 3
I0106 18:56:40.525974       6 log.go:172] (0xc0019a3a40) (3) Data frame handling
I0106 18:56:40.527913       6 log.go:172] (0xc001a442c0) Data frame received for 1
I0106 18:56:40.527981       6 log.go:172] (0xc002804e60) (1) Data frame handling
I0106 18:56:40.528020       6 log.go:172] (0xc002804e60) (1) Data frame sent
I0106 18:56:40.528071       6 log.go:172] (0xc001a442c0) (0xc002804e60) Stream removed, broadcasting: 1
I0106 18:56:40.528108       6 log.go:172] (0xc001a442c0) Go away received
I0106 18:56:40.528310       6 log.go:172] (0xc001a442c0) (0xc002804e60) Stream removed, broadcasting: 1
I0106 18:56:40.528338       6 log.go:172] (0xc001a442c0) (0xc0019a3a40) Stream removed, broadcasting: 3
I0106 18:56:40.528351       6 log.go:172] (0xc001a442c0) (0xc002805040) Stream removed, broadcasting: 5
Jan  6 18:56:40.528: INFO: Found all expected endpoints: [netserver-0]
Jan  6 18:56:40.532: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.167 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-kv8rh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 18:56:40.532: INFO: >>> kubeConfig: /root/.kube/config
I0106 18:56:40.570522       6 log.go:172] (0xc000ad5550) (0xc000b29c20) Create stream
I0106 18:56:40.570549       6 log.go:172] (0xc000ad5550) (0xc000b29c20) Stream added, broadcasting: 1
I0106 18:56:40.573016       6 log.go:172] (0xc000ad5550) Reply frame received for 1
I0106 18:56:40.573083       6 log.go:172] (0xc000ad5550) (0xc000b29cc0) Create stream
I0106 18:56:40.573101       6 log.go:172] (0xc000ad5550) (0xc000b29cc0) Stream added, broadcasting: 3
I0106 18:56:40.574034       6 log.go:172] (0xc000ad5550) Reply frame received for 3
I0106 18:56:40.574065       6 log.go:172] (0xc000ad5550) (0xc000b29d60) Create stream
I0106 18:56:40.574073       6 log.go:172] (0xc000ad5550) (0xc000b29d60) Stream added, broadcasting: 5
I0106 18:56:40.574927       6 log.go:172] (0xc000ad5550) Reply frame received for 5
I0106 18:56:41.645952       6 log.go:172] (0xc000ad5550) Data frame received for 3
I0106 18:56:41.645978       6 log.go:172] (0xc000b29cc0) (3) Data frame handling
I0106 18:56:41.645986       6 log.go:172] (0xc000b29cc0) (3) Data frame sent
I0106 18:56:41.645990       6 log.go:172] (0xc000ad5550) Data frame received for 3
I0106 18:56:41.645994       6 log.go:172] (0xc000b29cc0) (3) Data frame handling
I0106 18:56:41.646474       6 log.go:172] (0xc000ad5550) Data frame received for 5
I0106 18:56:41.646545       6 log.go:172] (0xc000b29d60) (5) Data frame handling
I0106 18:56:41.647894       6 log.go:172] (0xc000ad5550) Data frame received for 1
I0106 18:56:41.647908       6 log.go:172] (0xc000b29c20) (1) Data frame handling
I0106 18:56:41.647921       6 log.go:172] (0xc000b29c20) (1) Data frame sent
I0106 18:56:41.647937       6 log.go:172] (0xc000ad5550) (0xc000b29c20) Stream removed, broadcasting: 1
I0106 18:56:41.648010       6 log.go:172] (0xc000ad5550) Go away received
I0106 18:56:41.648090       6 log.go:172] (0xc000ad5550) (0xc000b29c20) Stream removed, broadcasting: 1
I0106 18:56:41.648143       6 log.go:172] (0xc000ad5550) (0xc000b29cc0) Stream removed, broadcasting: 3
I0106 18:56:41.648156       6 log.go:172] (0xc000ad5550) (0xc000b29d60) Stream removed, broadcasting: 5
Jan  6 18:56:41.648: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:56:41.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-kv8rh" for this suite.
Jan  6 18:57:05.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:57:05.760: INFO: namespace: e2e-tests-pod-network-test-kv8rh, resource: bindings, ignored listing per whitelist
Jan  6 18:57:05.776: INFO: namespace e2e-tests-pod-network-test-kv8rh deletion completed in 24.106143115s

• [SLOW TEST:56.628 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:57:05.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-f85723ab-5050-11eb-8655-0242ac110009
STEP: Creating secret with name s-test-opt-upd-f857240f-5050-11eb-8655-0242ac110009
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f85723ab-5050-11eb-8655-0242ac110009
STEP: Updating secret s-test-opt-upd-f857240f-5050-11eb-8655-0242ac110009
STEP: Creating secret with name s-test-opt-create-f8572436-5050-11eb-8655-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:57:18.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jsp4m" for this suite.
Jan  6 18:57:42.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:57:42.131: INFO: namespace: e2e-tests-projected-jsp4m, resource: bindings, ignored listing per whitelist
Jan  6 18:57:42.135: INFO: namespace e2e-tests-projected-jsp4m deletion completed in 24.112876283s

• [SLOW TEST:36.358 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:57:42.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  6 18:57:42.230: INFO: Waiting up to 5m0s for pod "pod-0dfc3def-5051-11eb-8655-0242ac110009" in namespace "e2e-tests-emptydir-6678x" to be "success or failure"
Jan  6 18:57:42.234: INFO: Pod "pod-0dfc3def-5051-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052863ms
Jan  6 18:57:44.238: INFO: Pod "pod-0dfc3def-5051-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008128371s
Jan  6 18:57:46.243: INFO: Pod "pod-0dfc3def-5051-11eb-8655-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.012662846s
Jan  6 18:57:48.247: INFO: Pod "pod-0dfc3def-5051-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017183215s
STEP: Saw pod success
Jan  6 18:57:48.247: INFO: Pod "pod-0dfc3def-5051-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:57:48.251: INFO: Trying to get logs from node hunter-worker2 pod pod-0dfc3def-5051-11eb-8655-0242ac110009 container test-container: 
STEP: delete the pod
Jan  6 18:57:48.279: INFO: Waiting for pod pod-0dfc3def-5051-11eb-8655-0242ac110009 to disappear
Jan  6 18:57:48.294: INFO: Pod pod-0dfc3def-5051-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:57:48.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6678x" for this suite.
Jan  6 18:57:54.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:57:54.365: INFO: namespace: e2e-tests-emptydir-6678x, resource: bindings, ignored listing per whitelist
Jan  6 18:57:54.413: INFO: namespace e2e-tests-emptydir-6678x deletion completed in 6.115185095s

• [SLOW TEST:12.278 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  6 18:57:54.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-154dcd72-5051-11eb-8655-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  6 18:57:54.534: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1551f279-5051-11eb-8655-0242ac110009" in namespace "e2e-tests-projected-xr2kk" to be "success or failure"
Jan  6 18:57:54.550: INFO: Pod "pod-projected-configmaps-1551f279-5051-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.623864ms
Jan  6 18:57:56.554: INFO: Pod "pod-projected-configmaps-1551f279-5051-11eb-8655-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020618689s
Jan  6 18:57:58.558: INFO: Pod "pod-projected-configmaps-1551f279-5051-11eb-8655-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024329149s
STEP: Saw pod success
Jan  6 18:57:58.558: INFO: Pod "pod-projected-configmaps-1551f279-5051-11eb-8655-0242ac110009" satisfied condition "success or failure"
Jan  6 18:57:58.561: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-1551f279-5051-11eb-8655-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  6 18:57:58.580: INFO: Waiting for pod pod-projected-configmaps-1551f279-5051-11eb-8655-0242ac110009 to disappear
Jan  6 18:57:58.761: INFO: Pod pod-projected-configmaps-1551f279-5051-11eb-8655-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  6 18:57:58.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xr2kk" for this suite.
Jan  6 18:58:04.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 18:58:04.839: INFO: namespace: e2e-tests-projected-xr2kk, resource: bindings, ignored listing per whitelist
Jan  6 18:58:04.872: INFO: namespace e2e-tests-projected-xr2kk deletion completed in 6.107139374s

• [SLOW TEST:10.459 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
Jan  6 18:58:04.873: INFO: Running AfterSuite actions on all nodes
Jan  6 18:58:04.873: INFO: Running AfterSuite actions on node 1
Jan  6 18:58:04.873: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6153.754 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS