I0113 17:18:36.461615 6 e2e.go:224] Starting e2e run "5ea38b89-55c3-11eb-8355-0242ac110009" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1610558315 - Will randomize all specs
Will run 201 of 2164 specs

Jan 13 17:18:36.635: INFO: >>> kubeConfig: /root/.kube/config
Jan 13 17:18:36.638: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 13 17:18:36.649: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 13 17:18:36.686: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 13 17:18:36.686: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 13 17:18:36.686: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 13 17:18:36.695: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jan 13 17:18:36.696: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 13 17:18:36.696: INFO: e2e test version: v1.13.12
Jan 13 17:18:36.696: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:18:36.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
Jan 13 17:18:36.792: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-9z2fb
Jan 13 17:18:40.808: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-9z2fb
STEP: checking the pod's current state and verifying that restartCount is present
Jan 13 17:18:40.811: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:22:40.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9z2fb" for this suite.
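For context on what this spec checks: the kubelet runs the configured exec probe command inside the container and restarts the container only when the command exits non-zero. A minimal local sketch of that pass/fail decision, assuming a stand-in temp file rather than the container's actual /tmp/health:

```shell
# Emulate the `cat /tmp/health` exec liveness probe: command success (exit 0)
# means healthy; a restart would be triggered only on failure.
health_file=$(mktemp)        # stands in for /tmp/health inside the container
echo ok > "$health_file"
if cat "$health_file" > /dev/null 2>&1; then
  echo "probe succeeded: no restart"
else
  echo "probe failed: restart container"
fi
rm -f "$health_file"
```

Because the test's container keeps the file in place, the probe keeps succeeding and the restart count stays at 0, which is exactly what the log above asserts.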
Jan 13 17:22:46.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:22:46.926: INFO: namespace: e2e-tests-container-probe-9z2fb, resource: bindings, ignored listing per whitelist
Jan 13 17:22:46.974: INFO: namespace e2e-tests-container-probe-9z2fb deletion completed in 6.099731261s

• [SLOW TEST:250.278 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:22:46.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-b926q A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-b926q;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-b926q A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-b926q;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-b926q.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-b926q.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-b926q.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-b926q.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-b926q.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-b926q.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-b926q.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-b926q.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-b926q.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-b926q.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-b926q.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 51.122.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.122.51_udp@PTR;check="$$(dig +tcp +noall +answer +search 51.122.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.122.51_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-b926q A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-b926q;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-b926q A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-b926q;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-b926q.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-b926q.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-b926q.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-b926q.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-b926q.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-b926q.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-b926q.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-b926q.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-b926q.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-b926q.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-b926q.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 51.122.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.122.51_udp@PTR;check="$$(dig +tcp +noall +answer +search 51.122.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.122.51_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 13 17:22:55.211: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:22:55.218: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:22:55.227: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:22:55.273: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:22:55.275: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:22:55.278: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:22:55.280: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:22:55.283: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:22:55.286: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:22:55.289: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:22:55.291: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:22:55.306: INFO: Lookups using e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-b926q wheezy_tcp@dns-test-service.e2e-tests-dns-b926q.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-b926q jessie_tcp@dns-test-service.e2e-tests-dns-b926q jessie_udp@dns-test-service.e2e-tests-dns-b926q.svc jessie_tcp@dns-test-service.e2e-tests-dns-b926q.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc]
Jan 13 17:23:00.311: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:00.318: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:00.328: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:00.356: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:00.359: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:00.361: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:00.363: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:00.366: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:00.369: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:00.372: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:00.375: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:00.393: INFO: Lookups using e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-b926q wheezy_tcp@dns-test-service.e2e-tests-dns-b926q.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-b926q jessie_tcp@dns-test-service.e2e-tests-dns-b926q jessie_udp@dns-test-service.e2e-tests-dns-b926q.svc jessie_tcp@dns-test-service.e2e-tests-dns-b926q.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc]
Jan 13 17:23:05.393: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:05.399: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:05.406: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:05.434: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:05.436: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:05.439: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:05.442: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:05.445: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:05.448: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:05.451: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:05.454: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:05.473: INFO: Lookups using e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-b926q wheezy_tcp@dns-test-service.e2e-tests-dns-b926q.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-b926q jessie_tcp@dns-test-service.e2e-tests-dns-b926q jessie_udp@dns-test-service.e2e-tests-dns-b926q.svc jessie_tcp@dns-test-service.e2e-tests-dns-b926q.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc]
Jan 13 17:23:10.311: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:10.318: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:10.329: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:10.357: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:10.360: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:10.363: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:10.366: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:10.370: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:10.376: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:10.379: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:10.381: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:10.393: INFO: Lookups using e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-b926q wheezy_tcp@dns-test-service.e2e-tests-dns-b926q.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-b926q jessie_tcp@dns-test-service.e2e-tests-dns-b926q jessie_udp@dns-test-service.e2e-tests-dns-b926q.svc jessie_tcp@dns-test-service.e2e-tests-dns-b926q.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc]
Jan 13 17:23:15.314: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:15.321: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:15.330: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:15.426: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:15.428: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:15.431: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:15.433: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:15.435: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:15.438: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:15.440: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:15.443: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:15.556: INFO: Lookups using e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-b926q wheezy_tcp@dns-test-service.e2e-tests-dns-b926q.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-b926q jessie_tcp@dns-test-service.e2e-tests-dns-b926q jessie_udp@dns-test-service.e2e-tests-dns-b926q.svc jessie_tcp@dns-test-service.e2e-tests-dns-b926q.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc]
Jan 13 17:23:20.310: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:20.316: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:20.325: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:20.353: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:20.357: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:20.360: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:20.363: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b926q from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:20.367: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:20.370: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:20.373: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:20.377: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc from pod e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009: the server could not find the requested resource (get pods dns-test-f45caf48-55c3-11eb-8355-0242ac110009)
Jan 13 17:23:20.397: INFO: Lookups using e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-b926q wheezy_tcp@dns-test-service.e2e-tests-dns-b926q.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-b926q jessie_tcp@dns-test-service.e2e-tests-dns-b926q jessie_udp@dns-test-service.e2e-tests-dns-b926q.svc jessie_tcp@dns-test-service.e2e-tests-dns-b926q.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b926q.svc]
Jan 13 17:23:25.417: INFO: DNS probes using e2e-tests-dns-b926q/dns-test-f45caf48-55c3-11eb-8355-0242ac110009 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:23:26.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-b926q" for this suite.
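For reference, the `podARec=$$(hostname -i| awk -F. ...)` step in the probe commands above builds the pod's A record name by replacing the dots in the pod IP with dashes and appending `<namespace>.pod.cluster.local`. The same transformation, demonstrated standalone with a made-up example IP instead of `hostname -i`:

```shell
# Build a pod A record name from a pod IP (10.244.1.7 is an assumed example,
# not an IP taken from this log).
pod_ip="10.244.1.7"
echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".e2e-tests-dns-b926q.pod.cluster.local"}'
# prints 10-244-1-7.e2e-tests-dns-b926q.pod.cluster.local
```

The doubled `$$` in the logged commands is template escaping by the test framework; the shell that runs inside the probe container sees single `$` as in this sketch.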
Jan 13 17:23:32.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:23:32.518: INFO: namespace: e2e-tests-dns-b926q, resource: bindings, ignored listing per whitelist
Jan 13 17:23:32.559: INFO: namespace e2e-tests-dns-b926q deletion completed in 6.104229701s
• [SLOW TEST:45.584 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:23:32.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0f87d451-55c4-11eb-8355-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 13 17:23:32.803: INFO: Waiting up to 5m0s for pod "pod-secrets-0f8b2c2c-55c4-11eb-8355-0242ac110009" in namespace "e2e-tests-secrets-xfnfg" to be "success or failure"
Jan 13 17:23:32.810: INFO: Pod "pod-secrets-0f8b2c2c-55c4-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 7.014652ms
Jan 13 17:23:34.814: INFO: Pod "pod-secrets-0f8b2c2c-55c4-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011439035s
Jan 13 17:23:36.818: INFO: Pod "pod-secrets-0f8b2c2c-55c4-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014990155s
STEP: Saw pod success
Jan 13 17:23:36.818: INFO: Pod "pod-secrets-0f8b2c2c-55c4-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 17:23:36.820: INFO: Trying to get logs from node hunter-control-plane pod pod-secrets-0f8b2c2c-55c4-11eb-8355-0242ac110009 container secret-env-test: 
STEP: delete the pod
Jan 13 17:23:36.861: INFO: Waiting for pod pod-secrets-0f8b2c2c-55c4-11eb-8355-0242ac110009 to disappear
Jan 13 17:23:36.885: INFO: Pod pod-secrets-0f8b2c2c-55c4-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:23:36.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xfnfg" for this suite.
Jan 13 17:23:42.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:23:42.932: INFO: namespace: e2e-tests-secrets-xfnfg, resource: bindings, ignored listing per whitelist
Jan 13 17:23:42.999: INFO: namespace e2e-tests-secrets-xfnfg deletion completed in 6.110666145s
• [SLOW TEST:10.440 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:23:42.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-15b3382f-55c4-11eb-8355-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 13 17:23:43.148: INFO: Waiting up to 5m0s for pod "pod-secrets-15ba8b4d-55c4-11eb-8355-0242ac110009" in namespace "e2e-tests-secrets-44q6w" to be "success or failure"
Jan 13 17:23:43.178: INFO: Pod "pod-secrets-15ba8b4d-55c4-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 29.381482ms
Jan 13 17:23:45.182: INFO: Pod "pod-secrets-15ba8b4d-55c4-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033510646s
Jan 13 17:23:47.186: INFO: Pod "pod-secrets-15ba8b4d-55c4-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037330553s
STEP: Saw pod success
Jan 13 17:23:47.186: INFO: Pod "pod-secrets-15ba8b4d-55c4-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 17:23:47.188: INFO: Trying to get logs from node hunter-control-plane pod pod-secrets-15ba8b4d-55c4-11eb-8355-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan 13 17:23:47.206: INFO: Waiting for pod pod-secrets-15ba8b4d-55c4-11eb-8355-0242ac110009 to disappear
Jan 13 17:23:47.211: INFO: Pod pod-secrets-15ba8b4d-55c4-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:23:47.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-44q6w" for this suite.
Jan 13 17:23:53.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:23:53.314: INFO: namespace: e2e-tests-secrets-44q6w, resource: bindings, ignored listing per whitelist
Jan 13 17:23:53.373: INFO: namespace e2e-tests-secrets-44q6w deletion completed in 6.15811876s
• [SLOW TEST:10.374 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:23:53.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 17:23:53.525: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 13 17:23:53.551: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 13 17:23:58.554: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 13 17:23:58.554: INFO: Creating deployment "test-rolling-update-deployment"
Jan 13 17:23:58.557: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 13 17:23:58.585: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 13 17:24:00.614: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 13 17:24:00.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746155438, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746155438, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746155438, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746155438,
loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 13 17:24:02.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746155438, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746155438, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746155442, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746155438, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 13 17:24:04.927: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 13 17:24:04.996: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-9v4lx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9v4lx/deployments/test-rolling-update-deployment,UID:1eea3e7b-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482230,Generation:1,CreationTimestamp:2021-01-13 17:23:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name:
sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2021-01-13 17:23:58 +0000 UTC 2021-01-13 17:23:58 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-01-13 17:24:04 +0000 UTC 2021-01-13 17:23:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Jan 13 17:24:05.008: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-9v4lx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9v4lx/replicasets/test-rolling-update-deployment-75db98fb4c,UID:1eef8ca1-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482215,Generation:1,CreationTimestamp:2021-01-13 17:23:58 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 1eea3e7b-55c4-11eb-9c75-0242ac12000b 0xc001a75f97 0xc001a75f98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 13 17:24:05.008: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 13 17:24:05.008: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-9v4lx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9v4lx/replicasets/test-rolling-update-controller,UID:1beaedee-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482229,Generation:2,CreationTimestamp:2021-01-13 17:23:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 1eea3e7b-55c4-11eb-9c75-0242ac12000b 0xc001a75ed7 0xc001a75ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod:
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 13 17:24:05.011: INFO: Pod "test-rolling-update-deployment-75db98fb4c-4lgnl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-4lgnl,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-9v4lx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9v4lx/pods/test-rolling-update-deployment-75db98fb4c-4lgnl,UID:1ef482f0-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482214,Generation:0,CreationTimestamp:2021-01-13 17:23:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 1eef8ca1-55c4-11eb-9c75-0242ac12000b 0xc0016370d7 0xc0016370d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pcmlx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pcmlx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-pcmlx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001637150} {node.kubernetes.io/unreachable Exists NoExecute 0xc001637170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:23:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:24:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:24:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:23:58 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.0.229,StartTime:2021-01-13 17:23:58 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2021-01-13 17:24:01 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://02a32a3d5f9ccdaa18ce1358fa8c0cfcff8085fdba8ad2e1efff99fafc71ae2f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:24:05.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-9v4lx" for this suite.
Jan 13 17:24:13.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:24:13.140: INFO: namespace: e2e-tests-deployment-9v4lx, resource: bindings, ignored listing per whitelist
Jan 13 17:24:13.179: INFO: namespace e2e-tests-deployment-9v4lx deletion completed in 8.164487997s
• [SLOW TEST:19.806 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:24:13.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-27bb0aed-55c4-11eb-8355-0242ac110009
STEP: Creating configMap with name cm-test-opt-upd-27bb0b75-55c4-11eb-8355-0242ac110009
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-27bb0aed-55c4-11eb-8355-0242ac110009
STEP: Updating configmap cm-test-opt-upd-27bb0b75-55c4-11eb-8355-0242ac110009
STEP: Creating configMap with name cm-test-opt-create-27bb0bab-55c4-11eb-8355-0242ac110009
STEP: waiting to observe
update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:25:29.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pxqvj" for this suite.
Jan 13 17:25:51.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:25:51.958: INFO: namespace: e2e-tests-configmap-pxqvj, resource: bindings, ignored listing per whitelist
Jan 13 17:25:52.030: INFO: namespace e2e-tests-configmap-pxqvj deletion completed in 22.108829816s
• [SLOW TEST:98.851 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:25:52.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 13 17:25:52.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-df8c9'
Jan 13 17:25:56.206: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 13 17:25:56.206: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan 13 17:25:56.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-df8c9'
Jan 13 17:25:56.354: INFO: stderr: ""
Jan 13 17:25:56.354: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:25:56.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-df8c9" for this suite.
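The stderr line above shows that `kubectl run --generator=job/v1` is deprecated. On newer kubectl versions the same object can be created imperatively with `kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine`, or declaratively. A hedged sketch of the Job this command produces — only the name, image, and restart policy come from the log; everything else is API defaults:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-nginx-job   # assumed container name
        image: docker.io/library/nginx:1.14-alpine
      restartPolicy: OnFailure     # mirrors --restart=OnFailure
```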
Jan 13 17:26:18.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:26:18.411: INFO: namespace: e2e-tests-kubectl-df8c9, resource: bindings, ignored listing per whitelist
Jan 13 17:26:18.476: INFO: namespace e2e-tests-kubectl-df8c9 deletion completed in 22.119013624s
• [SLOW TEST:26.446 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:26:18.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 13 17:26:18.613: INFO: Waiting up to 5m0s for pod "downwardapi-volume-726044e5-55c4-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-8xckj" to be "success or failure"
Jan 13 17:26:18.632: INFO: Pod "downwardapi-volume-726044e5-55c4-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 18.968603ms
Jan 13 17:26:20.636: INFO: Pod "downwardapi-volume-726044e5-55c4-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023142528s
Jan 13 17:26:22.665: INFO: Pod "downwardapi-volume-726044e5-55c4-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052171367s
Jan 13 17:26:24.669: INFO: Pod "downwardapi-volume-726044e5-55c4-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056111577s
STEP: Saw pod success
Jan 13 17:26:24.669: INFO: Pod "downwardapi-volume-726044e5-55c4-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 17:26:24.671: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-726044e5-55c4-11eb-8355-0242ac110009 container client-container: 
STEP: delete the pod
Jan 13 17:26:24.703: INFO: Waiting for pod downwardapi-volume-726044e5-55c4-11eb-8355-0242ac110009 to disappear
Jan 13 17:26:24.716: INFO: Pod downwardapi-volume-726044e5-55c4-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:26:24.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8xckj" for this suite.
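The projected downward API test above exposes the container's memory limit as a file in a volume. A hedged sketch of such a pod — the `resourceFieldRef` mechanism is the feature under test, but the pod name, image, mount path, and limit value below are assumptions, not the e2e fixture:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo     # assumed name
spec:
  containers:
  - name: client-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"              # assumed value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi          # file contains the limit in Mi
```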
Jan 13 17:26:30.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:26:30.768: INFO: namespace: e2e-tests-projected-8xckj, resource: bindings, ignored listing per whitelist
Jan 13 17:26:30.841: INFO: namespace e2e-tests-projected-8xckj deletion completed in 6.121168393s
• [SLOW TEST:12.365 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:26:30.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan 13 17:26:30.964: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:26:31.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-d8ndm" for this suite.
Jan 13 17:26:37.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:26:37.162: INFO: namespace: e2e-tests-kubectl-d8ndm, resource: bindings, ignored listing per whitelist
Jan 13 17:26:37.208: INFO: namespace e2e-tests-kubectl-d8ndm deletion completed in 6.129672786s
• [SLOW TEST:6.367 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:26:37.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 17:26:37.313: INFO: Creating deployment "nginx-deployment"
Jan 13 17:26:37.334: INFO: Waiting for observed generation 1
Jan 13 17:26:39.777: INFO: Waiting for all required pods to come up
Jan 13 17:26:39.960: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 13 17:26:54.014: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 13 17:26:54.018: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 13 17:26:54.022: INFO: Updating deployment nginx-deployment
Jan 13 17:26:54.022: INFO: Waiting for observed generation 2
Jan 13 17:26:58.141: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 13 17:26:58.145: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 13 17:26:58.798: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 13 17:26:59.792: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 13 17:26:59.792: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 13 17:26:59.795: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 13 17:26:59.851: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 13 17:26:59.851: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 13 17:26:59.856: INFO: Updating deployment nginx-deployment
Jan 13 17:26:59.856: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 13 17:27:00.678: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 13 17:27:02.774: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 13 17:27:02.790: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-9q728,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9q728/deployments/nginx-deployment,UID:7d8afec4-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483067,Generation:3,CreationTimestamp:2021-01-13 17:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2021-01-13 17:26:59 +0000 UTC 2021-01-13 17:26:59 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2021-01-13 17:27:02 +0000 UTC 2021-01-13 17:26:37 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 13 17:27:02.826: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-9q728,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9q728/replicasets/nginx-deployment-5c98f8fb5,UID:8780a839-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483059,Generation:3,CreationTimestamp:2021-01-13 17:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7d8afec4-55c4-11eb-9c75-0242ac12000b 0xc00001c287 0xc00001c288}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 13 17:27:02.826: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 13 17:27:02.827: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-9q728,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9q728/replicasets/nginx-deployment-85ddf47c5d,UID:7d8f2d2a-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483057,Generation:3,CreationTimestamp:2021-01-13 17:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7d8afec4-55c4-11eb-9c75-0242ac12000b 0xc00001c3f7 0xc00001c3f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 13 17:27:02.863: INFO: Pod "nginx-deployment-5c98f8fb5-244d2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-244d2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-5c98f8fb5-244d2,UID:8b7ffff1-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483017,Generation:0,CreationTimestamp:2021-01-13 17:27:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 8780a839-55c4-11eb-9c75-0242ac12000b 0xc00001dd47 0xc00001dd48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00001de50} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00001de80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.863: INFO: Pod "nginx-deployment-5c98f8fb5-2jvl9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2jvl9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-5c98f8fb5-2jvl9,UID:87866cca-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482981,Generation:0,CreationTimestamp:2021-01-13 17:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 8780a839-55c4-11eb-9c75-0242ac12000b 0xc00001df70 0xc00001df71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00022c0b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00022c0e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2021-01-13 17:26:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.864: INFO: Pod "nginx-deployment-5c98f8fb5-6d9wd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6d9wd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-5c98f8fb5-6d9wd,UID:8bcc2872-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483045,Generation:0,CreationTimestamp:2021-01-13 17:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 8780a839-55c4-11eb-9c75-0242ac12000b 0xc00022c2b0 0xc00022c2b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00022c790} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00022d000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.864: INFO: Pod "nginx-deployment-5c98f8fb5-7t4ff" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7t4ff,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-5c98f8fb5-7t4ff,UID:8b9b4f46-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483020,Generation:0,CreationTimestamp:2021-01-13 17:27:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 8780a839-55c4-11eb-9c75-0242ac12000b 0xc00022da40 0xc00022da41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000058c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc000058c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.864: INFO: Pod "nginx-deployment-5c98f8fb5-82mjl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-82mjl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-5c98f8fb5-82mjl,UID:8bcbfb37-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483039,Generation:0,CreationTimestamp:2021-01-13 17:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 8780a839-55c4-11eb-9c75-0242ac12000b 0xc000058e20 0xc000058e21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000059000} {node.kubernetes.io/unreachable Exists NoExecute 0xc000059060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.864: INFO: Pod "nginx-deployment-5c98f8fb5-d9ltj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-d9ltj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-5c98f8fb5-d9ltj,UID:8bcc2a91-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483047,Generation:0,CreationTimestamp:2021-01-13 17:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 8780a839-55c4-11eb-9c75-0242ac12000b 0xc0000591b0 0xc0000591b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000059ef0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001636090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.864: INFO: Pod "nginx-deployment-5c98f8fb5-drztd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-drztd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-5c98f8fb5-drztd,UID:8b9b6305-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483025,Generation:0,CreationTimestamp:2021-01-13 17:27:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 8780a839-55c4-11eb-9c75-0242ac12000b 0xc0016361f0 0xc0016361f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001636260} {node.kubernetes.io/unreachable Exists NoExecute 0xc001636450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.864: INFO: Pod "nginx-deployment-5c98f8fb5-hlf9h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hlf9h,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-5c98f8fb5-hlf9h,UID:87ab0275-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482996,Generation:0,CreationTimestamp:2021-01-13 17:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 8780a839-55c4-11eb-9c75-0242ac12000b 0xc001636560 0xc001636561}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016365f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001636620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2021-01-13 17:26:54 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.864: INFO: Pod "nginx-deployment-5c98f8fb5-hmnsx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hmnsx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-5c98f8fb5-hmnsx,UID:87a94ffd-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482987,Generation:0,CreationTimestamp:2021-01-13 17:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 8780a839-55c4-11eb-9c75-0242ac12000b 0xc0016366f0 0xc0016366f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001636810} {node.kubernetes.io/unreachable Exists NoExecute 0xc001636830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2021-01-13 17:26:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.864: INFO: Pod "nginx-deployment-5c98f8fb5-j8jkk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-j8jkk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-5c98f8fb5-j8jkk,UID:8786712a-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482984,Generation:0,CreationTimestamp:2021-01-13 17:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 8780a839-55c4-11eb-9c75-0242ac12000b 0xc001636970 0xc001636971}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001636a30} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001636a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2021-01-13 17:26:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.865: INFO: Pod "nginx-deployment-5c98f8fb5-l2h5h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l2h5h,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-5c98f8fb5-l2h5h,UID:8be664b3-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483050,Generation:0,CreationTimestamp:2021-01-13 17:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 8780a839-55c4-11eb-9c75-0242ac12000b 0xc001636b40 0xc001636b41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001636bc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001636be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.865: INFO: Pod "nginx-deployment-5c98f8fb5-mc4fn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mc4fn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-5c98f8fb5-mc4fn,UID:87839119-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482975,Generation:0,CreationTimestamp:2021-01-13 17:26:54 
+0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 8780a839-55c4-11eb-9c75-0242ac12000b 0xc001636c80 0xc001636c81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001636cf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001636d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 
17:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:54 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2021-01-13 17:26:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.865: INFO: Pod "nginx-deployment-5c98f8fb5-n7nf8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n7nf8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-5c98f8fb5-n7nf8,UID:8bcc354c-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483042,Generation:0,CreationTimestamp:2021-01-13 17:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 8780a839-55c4-11eb-9c75-0242ac12000b 0xc001636df0 0xc001636df1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001636e90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001636eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.865: INFO: Pod "nginx-deployment-85ddf47c5d-2t67t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2t67t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-2t67t,UID:8b7facfa-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483058,Generation:0,CreationTimestamp:2021-01-13 17:27:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc001636f30 0xc001636f31}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001636fa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001636fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:00 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2021-01-13 17:27:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.865: INFO: Pod "nginx-deployment-85ddf47c5d-4shln" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4shln,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-4shln,UID:7d9433da-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482858,Generation:0,CreationTimestamp:2021-01-13 17:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc001637090 0xc001637091}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001637110} {node.kubernetes.io/unreachable Exists NoExecute 0xc001637130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.0.237,StartTime:2021-01-13 17:26:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-13 17:26:44 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f7b2638771f077d3be6b7d20eda7678a6b31f8987591bc14f0d4cbec94f8bedf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.865: INFO: Pod "nginx-deployment-85ddf47c5d-5m952" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5m952,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-5m952,UID:8bcbe7bf-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483038,Generation:0,CreationTimestamp:2021-01-13 17:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc001637200 0xc001637201}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001637260} {node.kubernetes.io/unreachable Exists NoExecute 0xc001637280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.865: INFO: Pod "nginx-deployment-85ddf47c5d-d7nb5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-d7nb5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-d7nb5,UID:8bcc229e-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483043,Generation:0,CreationTimestamp:2021-01-13 17:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc001637310 0xc001637311}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001637370} {node.kubernetes.io/unreachable Exists NoExecute 0xc001637390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.865: INFO: Pod "nginx-deployment-85ddf47c5d-dj4jd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dj4jd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-dj4jd,UID:8b9b570b-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483033,Generation:0,CreationTimestamp:2021-01-13 17:27:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc001637410 0xc001637411}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001637470} {node.kubernetes.io/unreachable Exists NoExecute 0xc001637490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.865: INFO: Pod "nginx-deployment-85ddf47c5d-dswz7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dswz7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-dswz7,UID:8b7c8377-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483051,Generation:0,CreationTimestamp:2021-01-13 17:27:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc001637520 0xc001637521}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001637580} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016375a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:00 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2021-01-13 17:27:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.866: INFO: Pod "nginx-deployment-85ddf47c5d-jdv8t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jdv8t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-jdv8t,UID:8b9b5c4e-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483021,Generation:0,CreationTimestamp:2021-01-13 17:27:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc001637660 0xc001637661}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016376c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016376e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.866: INFO: Pod "nginx-deployment-85ddf47c5d-kcbl4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kcbl4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-kcbl4,UID:8b7fbefc-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483069,Generation:0,CreationTimestamp:2021-01-13 17:27:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc001637770 0xc001637771}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0016377d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016377f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:00 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2021-01-13 17:27:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.866: INFO: Pod "nginx-deployment-85ddf47c5d-ldd7b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ldd7b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-ldd7b,UID:7d941310-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482896,Generation:0,CreationTimestamp:2021-01-13 17:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc0016378b0 0xc0016378b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001637910} {node.kubernetes.io/unreachable Exists NoExecute 0xc001637930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.0.240,StartTime:2021-01-13 17:26:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-13 17:26:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine 
docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://260e9a9844ceccbc52c0c7f822ebbd6d146e5adc36f5de2f9dba679bd5fbd7ef}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.866: INFO: Pod "nginx-deployment-85ddf47c5d-nc5zg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nc5zg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-nc5zg,UID:8b9b7a08-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483027,Generation:0,CreationTimestamp:2021-01-13 17:27:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc001637a50 0xc001637a51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001637ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001637ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.866: INFO: Pod "nginx-deployment-85ddf47c5d-p2q5s" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p2q5s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-p2q5s,UID:7d9d2a40-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482910,Generation:0,CreationTimestamp:2021-01-13 17:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc001637b40 0xc001637b41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001637bc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001637be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.0.244,StartTime:2021-01-13 17:26:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-13 17:26:51 +0000 UTC,} 
nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9182912c898d5181edc9842484dea8c430ca5ee385750f9cf785ec418d389e00}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.866: INFO: Pod "nginx-deployment-85ddf47c5d-pcd2f" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pcd2f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-pcd2f,UID:7d9fcd16-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482891,Generation:0,CreationTimestamp:2021-01-13 17:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc001637cd0 0xc001637cd1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001637d50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001637d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.0.242,StartTime:2021-01-13 17:26:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-13 17:26:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://accc538075373667b465e10cb74b2436fae52404446608f3df97ede58923d6d8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.866: INFO: Pod "nginx-deployment-85ddf47c5d-pz9mh" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pz9mh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-pz9mh,UID:7d93a9bf-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482848,Generation:0,CreationTimestamp:2021-01-13 17:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc001637e70 0xc001637e71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001637ed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001637ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.0.236,StartTime:2021-01-13 17:26:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-13 17:26:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e286d2e3d7c5af20849da4da6f0332fd8569cad45ff829cdf488abaa2956f242}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.866: INFO: Pod "nginx-deployment-85ddf47c5d-qj25r" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qj25r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-qj25r,UID:7d9cef40-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482874,Generation:0,CreationTimestamp:2021-01-13 17:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc001637fd0 0xc001637fd1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00101e030} {node.kubernetes.io/unreachable Exists NoExecute 0xc00101e050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.0.239,StartTime:2021-01-13 17:26:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-13 17:26:47 +0000 UTC,} 
nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f8f7d108611d09d3026e31a2ef78f03d782a0a896124eda4b40965377a0e1453}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.867: INFO: Pod "nginx-deployment-85ddf47c5d-sm5lz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sm5lz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-sm5lz,UID:7d9cc302-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482870,Generation:0,CreationTimestamp:2021-01-13 17:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc00101e200 0xc00101e201}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00101e270} {node.kubernetes.io/unreachable Exists NoExecute 0xc00101e290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.0.238,StartTime:2021-01-13 17:26:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-13 17:26:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1f2c6294aa9724f3df48d5de419b9ef6146f464b2a216d99b581890e1c74abec}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.867: INFO: Pod "nginx-deployment-85ddf47c5d-t8nq6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t8nq6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-t8nq6,UID:8b9b7ad0-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483035,Generation:0,CreationTimestamp:2021-01-13 17:27:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc00101e6e0 0xc00101e6e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00101e760} {node.kubernetes.io/unreachable Exists NoExecute 0xc00101e7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.867: INFO: Pod "nginx-deployment-85ddf47c5d-v6s44" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v6s44,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-v6s44,UID:8bcbd68d-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483037,Generation:0,CreationTimestamp:2021-01-13 17:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc00101e830 0xc00101e831}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00101e8c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00101e8e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.867: INFO: Pod "nginx-deployment-85ddf47c5d-vp4nd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vp4nd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-vp4nd,UID:7d9d215d-55c4-11eb-9c75-0242ac12000b,ResourceVersion:482884,Generation:0,CreationTimestamp:2021-01-13 17:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc00101e970 0xc00101e971}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00101e9e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00101ea00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:26:37 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.0.241,StartTime:2021-01-13 17:26:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-13 17:26:49 +0000 UTC,} 
nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://76305ea248c92837f0f0e1317ce2088f1567fa2cf0cacf6898723cd710b6d87e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.867: INFO: Pod "nginx-deployment-85ddf47c5d-wxntr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wxntr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-wxntr,UID:8bcc16c4-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483046,Generation:0,CreationTimestamp:2021-01-13 17:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc00101eb00 0xc00101eb01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00101eb90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00101ebb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 13 17:27:02.867: INFO: Pod "nginx-deployment-85ddf47c5d-ztjmp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ztjmp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9q728,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9q728/pods/nginx-deployment-85ddf47c5d-ztjmp,UID:8bcc264f-55c4-11eb-9c75-0242ac12000b,ResourceVersion:483040,Generation:0,CreationTimestamp:2021-01-13 17:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7d8f2d2a-55c4-11eb-9c75-0242ac12000b 0xc00101ec50 0xc00101ec51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4p5p {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-k4p5p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-k4p5p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00101ecc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00101ece0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:27:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:27:02.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-9q728" for this suite. 
Jan 13 17:27:33.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:27:33.213: INFO: namespace: e2e-tests-deployment-9q728, resource: bindings, ignored listing per whitelist Jan 13 17:27:33.215: INFO: namespace e2e-tests-deployment-9q728 deletion completed in 30.279667179s • [SLOW TEST:56.007 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:27:33.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 13 17:27:38.566: INFO: Successfully updated pod "labelsupdate9f46e578-55c4-11eb-8355-0242ac110009" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:27:42.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-projected-2hvlx" for this suite. Jan 13 17:28:06.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:28:06.713: INFO: namespace: e2e-tests-projected-2hvlx, resource: bindings, ignored listing per whitelist Jan 13 17:28:06.746: INFO: namespace e2e-tests-projected-2hvlx deletion completed in 24.106460293s • [SLOW TEST:33.531 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:28:06.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-l5tqw STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 13 17:28:06.837: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 13 17:28:25.257: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.0.27:8080/dial?request=hostName&protocol=http&host=10.244.0.26&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-l5tqw PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 13 17:28:25.257: INFO: >>> kubeConfig: /root/.kube/config I0113 17:28:25.285234 6 log.go:172] (0xc0011fa210) (0xc001dc60a0) Create stream I0113 17:28:25.285260 6 log.go:172] (0xc0011fa210) (0xc001dc60a0) Stream added, broadcasting: 1 I0113 17:28:25.287523 6 log.go:172] (0xc0011fa210) Reply frame received for 1 I0113 17:28:25.287567 6 log.go:172] (0xc0011fa210) (0xc000c00000) Create stream I0113 17:28:25.287585 6 log.go:172] (0xc0011fa210) (0xc000c00000) Stream added, broadcasting: 3 I0113 17:28:25.288436 6 log.go:172] (0xc0011fa210) Reply frame received for 3 I0113 17:28:25.288475 6 log.go:172] (0xc0011fa210) (0xc00179c000) Create stream I0113 17:28:25.288489 6 log.go:172] (0xc0011fa210) (0xc00179c000) Stream added, broadcasting: 5 I0113 17:28:25.289590 6 log.go:172] (0xc0011fa210) Reply frame received for 5 I0113 17:28:25.641165 6 log.go:172] (0xc0011fa210) Data frame received for 3 I0113 17:28:25.641192 6 log.go:172] (0xc000c00000) (3) Data frame handling I0113 17:28:25.641207 6 log.go:172] (0xc000c00000) (3) Data frame sent I0113 17:28:25.641963 6 log.go:172] (0xc0011fa210) Data frame received for 3 I0113 17:28:25.641997 6 log.go:172] (0xc000c00000) (3) Data frame handling I0113 17:28:25.642071 6 log.go:172] (0xc0011fa210) Data frame received for 5 I0113 17:28:25.642094 6 log.go:172] (0xc00179c000) (5) Data frame handling I0113 17:28:25.643943 6 log.go:172] (0xc0011fa210) Data frame received for 1 I0113 17:28:25.643979 6 log.go:172] (0xc001dc60a0) (1) Data frame handling I0113 17:28:25.643989 6 log.go:172] (0xc001dc60a0) (1) Data frame sent I0113 17:28:25.643998 6 log.go:172] (0xc0011fa210) (0xc001dc60a0) Stream removed, broadcasting: 1 I0113 17:28:25.644008 6 log.go:172] (0xc0011fa210) Go away 
received I0113 17:28:25.644249 6 log.go:172] (0xc0011fa210) (0xc001dc60a0) Stream removed, broadcasting: 1 I0113 17:28:25.644274 6 log.go:172] (0xc0011fa210) (0xc000c00000) Stream removed, broadcasting: 3 I0113 17:28:25.644290 6 log.go:172] (0xc0011fa210) (0xc00179c000) Stream removed, broadcasting: 5 Jan 13 17:28:25.644: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:28:25.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-l5tqw" for this suite. Jan 13 17:28:51.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:28:51.702: INFO: namespace: e2e-tests-pod-network-test-l5tqw, resource: bindings, ignored listing per whitelist Jan 13 17:28:51.785: INFO: namespace e2e-tests-pod-network-test-l5tqw deletion completed in 26.138138451s • [SLOW TEST:45.039 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:28:51.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Jan 13 17:28:58.225: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:29:23.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-65zxq" for this suite. Jan 13 17:29:29.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:29:29.571: INFO: namespace: e2e-tests-namespaces-65zxq, resource: bindings, ignored listing per whitelist Jan 13 17:29:29.582: INFO: namespace e2e-tests-namespaces-65zxq deletion completed in 6.099811819s STEP: Destroying namespace "e2e-tests-nsdeletetest-2rvjd" for this suite. Jan 13 17:29:29.585: INFO: Namespace e2e-tests-nsdeletetest-2rvjd was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-b8s7g" for this suite. 
Jan 13 17:29:35.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:29:35.610: INFO: namespace: e2e-tests-nsdeletetest-b8s7g, resource: bindings, ignored listing per whitelist Jan 13 17:29:35.693: INFO: namespace e2e-tests-nsdeletetest-b8s7g deletion completed in 6.10868404s • [SLOW TEST:43.908 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:29:35.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 13 17:29:35.845: INFO: Waiting up to 5m0s for pod "downward-api-e7f20f88-55c4-11eb-8355-0242ac110009" in namespace "e2e-tests-downward-api-9crv9" to be "success or failure" Jan 13 17:29:35.848: INFO: Pod "downward-api-e7f20f88-55c4-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.576807ms Jan 13 17:29:37.853: INFO: Pod "downward-api-e7f20f88-55c4-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008182519s Jan 13 17:29:39.857: INFO: Pod "downward-api-e7f20f88-55c4-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012463319s Jan 13 17:29:41.861: INFO: Pod "downward-api-e7f20f88-55c4-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016773008s STEP: Saw pod success Jan 13 17:29:41.861: INFO: Pod "downward-api-e7f20f88-55c4-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 17:29:41.865: INFO: Trying to get logs from node hunter-control-plane pod downward-api-e7f20f88-55c4-11eb-8355-0242ac110009 container dapi-container: STEP: delete the pod Jan 13 17:29:41.916: INFO: Waiting for pod downward-api-e7f20f88-55c4-11eb-8355-0242ac110009 to disappear Jan 13 17:29:41.989: INFO: Pod downward-api-e7f20f88-55c4-11eb-8355-0242ac110009 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:29:41.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9crv9" for this suite. 
Jan 13 17:29:48.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:29:48.142: INFO: namespace: e2e-tests-downward-api-9crv9, resource: bindings, ignored listing per whitelist Jan 13 17:29:48.152: INFO: namespace e2e-tests-downward-api-9crv9 deletion completed in 6.159830384s • [SLOW TEST:12.458 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:29:48.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jan 13 17:29:48.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7x6lr' Jan 13 17:29:48.591: INFO: stderr: "" Jan 13 17:29:48.591: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jan 13 17:29:49.596: INFO: Selector matched 1 pods for map[app:redis] Jan 13 17:29:49.596: INFO: Found 0 / 1 Jan 13 17:29:50.596: INFO: Selector matched 1 pods for map[app:redis] Jan 13 17:29:50.596: INFO: Found 0 / 1 Jan 13 17:29:51.596: INFO: Selector matched 1 pods for map[app:redis] Jan 13 17:29:51.596: INFO: Found 0 / 1 Jan 13 17:29:52.596: INFO: Selector matched 1 pods for map[app:redis] Jan 13 17:29:52.596: INFO: Found 1 / 1 Jan 13 17:29:52.597: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 13 17:29:52.600: INFO: Selector matched 1 pods for map[app:redis] Jan 13 17:29:52.600: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 13 17:29:52.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-h2d2z --namespace=e2e-tests-kubectl-7x6lr -p {"metadata":{"annotations":{"x":"y"}}}' Jan 13 17:29:52.699: INFO: stderr: "" Jan 13 17:29:52.699: INFO: stdout: "pod/redis-master-h2d2z patched\n" STEP: checking annotations Jan 13 17:29:52.704: INFO: Selector matched 1 pods for map[app:redis] Jan 13 17:29:52.704: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:29:52.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7x6lr" for this suite. 
Jan 13 17:30:16.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:30:16.860: INFO: namespace: e2e-tests-kubectl-7x6lr, resource: bindings, ignored listing per whitelist Jan 13 17:30:16.862: INFO: namespace e2e-tests-kubectl-7x6lr deletion completed in 24.155238813s • [SLOW TEST:28.710 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:30:16.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 13 17:30:16.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0075ad57-55c5-11eb-8355-0242ac110009" in namespace "e2e-tests-downward-api-zffnt" to be "success or failure" Jan 13 17:30:16.975: INFO: Pod 
"downwardapi-volume-0075ad57-55c5-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059672ms Jan 13 17:30:19.075: INFO: Pod "downwardapi-volume-0075ad57-55c5-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104558975s Jan 13 17:30:21.118: INFO: Pod "downwardapi-volume-0075ad57-55c5-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147868644s Jan 13 17:30:23.123: INFO: Pod "downwardapi-volume-0075ad57-55c5-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.152287778s STEP: Saw pod success Jan 13 17:30:23.123: INFO: Pod "downwardapi-volume-0075ad57-55c5-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 17:30:23.126: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-0075ad57-55c5-11eb-8355-0242ac110009 container client-container: STEP: delete the pod Jan 13 17:30:23.173: INFO: Waiting for pod downwardapi-volume-0075ad57-55c5-11eb-8355-0242ac110009 to disappear Jan 13 17:30:23.178: INFO: Pod downwardapi-volume-0075ad57-55c5-11eb-8355-0242ac110009 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:30:23.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-zffnt" for this suite. 
Jan 13 17:30:29.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:30:29.273: INFO: namespace: e2e-tests-downward-api-zffnt, resource: bindings, ignored listing per whitelist Jan 13 17:30:29.303: INFO: namespace e2e-tests-downward-api-zffnt deletion completed in 6.12201825s • [SLOW TEST:12.441 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:30:29.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-m6j78 Jan 13 17:30:33.573: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-m6j78 STEP: checking the pod's current state and verifying that restartCount is present Jan 13 17:30:33.576: INFO: Initial restart count of pod liveness-http is 0 Jan 13 
17:30:45.604: INFO: Restart count of pod e2e-tests-container-probe-m6j78/liveness-http is now 1 (12.027850424s elapsed) Jan 13 17:31:05.653: INFO: Restart count of pod e2e-tests-container-probe-m6j78/liveness-http is now 2 (32.077386926s elapsed) Jan 13 17:31:25.925: INFO: Restart count of pod e2e-tests-container-probe-m6j78/liveness-http is now 3 (52.349380494s elapsed) Jan 13 17:31:46.426: INFO: Restart count of pod e2e-tests-container-probe-m6j78/liveness-http is now 4 (1m12.849685124s elapsed) Jan 13 17:32:56.578: INFO: Restart count of pod e2e-tests-container-probe-m6j78/liveness-http is now 5 (2m23.00215172s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:32:56.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-m6j78" for this suite. Jan 13 17:33:02.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:33:02.749: INFO: namespace: e2e-tests-container-probe-m6j78, resource: bindings, ignored listing per whitelist Jan 13 17:33:02.753: INFO: namespace e2e-tests-container-probe-m6j78 deletion completed in 6.105913485s • [SLOW TEST:153.450 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:33:02.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-8rc58 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 13 17:33:02.866: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 13 17:33:29.011: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.0.44:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-8rc58 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 13 17:33:29.011: INFO: >>> kubeConfig: /root/.kube/config I0113 17:33:29.042416 6 log.go:172] (0xc000365130) (0xc001171680) Create stream I0113 17:33:29.042458 6 log.go:172] (0xc000365130) (0xc001171680) Stream added, broadcasting: 1 I0113 17:33:29.044233 6 log.go:172] (0xc000365130) Reply frame received for 1 I0113 17:33:29.044277 6 log.go:172] (0xc000365130) (0xc001dc6d20) Create stream I0113 17:33:29.044290 6 log.go:172] (0xc000365130) (0xc001dc6d20) Stream added, broadcasting: 3 I0113 17:33:29.045218 6 log.go:172] (0xc000365130) Reply frame received for 3 I0113 17:33:29.045247 6 log.go:172] (0xc000365130) (0xc0018cb680) Create stream I0113 17:33:29.045259 6 log.go:172] (0xc000365130) (0xc0018cb680) Stream added, broadcasting: 5 I0113 17:33:29.046132 6 log.go:172] (0xc000365130) Reply frame received for 5 I0113 17:33:29.113698 6 log.go:172] (0xc000365130) 
Data frame received for 3 I0113 17:33:29.113750 6 log.go:172] (0xc001dc6d20) (3) Data frame handling I0113 17:33:29.113774 6 log.go:172] (0xc001dc6d20) (3) Data frame sent I0113 17:33:29.113788 6 log.go:172] (0xc000365130) Data frame received for 3 I0113 17:33:29.113797 6 log.go:172] (0xc001dc6d20) (3) Data frame handling I0113 17:33:29.113941 6 log.go:172] (0xc000365130) Data frame received for 5 I0113 17:33:29.113965 6 log.go:172] (0xc0018cb680) (5) Data frame handling I0113 17:33:29.116130 6 log.go:172] (0xc000365130) Data frame received for 1 I0113 17:33:29.116153 6 log.go:172] (0xc001171680) (1) Data frame handling I0113 17:33:29.116166 6 log.go:172] (0xc001171680) (1) Data frame sent I0113 17:33:29.116195 6 log.go:172] (0xc000365130) (0xc001171680) Stream removed, broadcasting: 1 I0113 17:33:29.116212 6 log.go:172] (0xc000365130) Go away received I0113 17:33:29.116326 6 log.go:172] (0xc000365130) (0xc001171680) Stream removed, broadcasting: 1 I0113 17:33:29.116359 6 log.go:172] (0xc000365130) (0xc001dc6d20) Stream removed, broadcasting: 3 I0113 17:33:29.116378 6 log.go:172] (0xc000365130) (0xc0018cb680) Stream removed, broadcasting: 5 Jan 13 17:33:29.116: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:33:29.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-8rc58" for this suite. 
Jan 13 17:33:53.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:33:53.197: INFO: namespace: e2e-tests-pod-network-test-8rc58, resource: bindings, ignored listing per whitelist Jan 13 17:33:53.234: INFO: namespace e2e-tests-pod-network-test-8rc58 deletion completed in 24.114751935s • [SLOW TEST:50.481 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:33:53.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:33:53.334: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-pg6h5" for this suite. Jan 13 17:34:15.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:34:15.461: INFO: namespace: e2e-tests-pods-pg6h5, resource: bindings, ignored listing per whitelist Jan 13 17:34:15.520: INFO: namespace e2e-tests-pods-pg6h5 deletion completed in 22.158958111s • [SLOW TEST:22.285 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:34:15.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-dpxdf/configmap-test-8eb9c925-55c5-11eb-8355-0242ac110009 STEP: Creating a pod to test consume configMaps Jan 13 17:34:15.663: INFO: Waiting up to 5m0s for pod "pod-configmaps-8ebc409b-55c5-11eb-8355-0242ac110009" in namespace "e2e-tests-configmap-dpxdf" to be "success or 
failure" Jan 13 17:34:15.667: INFO: Pod "pod-configmaps-8ebc409b-55c5-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.612141ms Jan 13 17:34:17.671: INFO: Pod "pod-configmaps-8ebc409b-55c5-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0076677s Jan 13 17:34:19.675: INFO: Pod "pod-configmaps-8ebc409b-55c5-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01222701s STEP: Saw pod success Jan 13 17:34:19.675: INFO: Pod "pod-configmaps-8ebc409b-55c5-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 17:34:19.679: INFO: Trying to get logs from node hunter-control-plane pod pod-configmaps-8ebc409b-55c5-11eb-8355-0242ac110009 container env-test: STEP: delete the pod Jan 13 17:34:19.740: INFO: Waiting for pod pod-configmaps-8ebc409b-55c5-11eb-8355-0242ac110009 to disappear Jan 13 17:34:19.751: INFO: Pod pod-configmaps-8ebc409b-55c5-11eb-8355-0242ac110009 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:34:19.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-dpxdf" for this suite. 
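The ConfigMap-as-environment-variable pattern exercised by this test can be sketched roughly as follows; the names, data key, and image are illustrative assumptions, not values recorded in the log:

```yaml
# Hedged sketch of the pattern under test; names/image are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                  # assumed; the e2e suite uses its own image
    command: ["sh", "-c", "env"]    # prints env so the test can grep the value
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

The "success or failure" condition in the log corresponds to the pod reaching `Succeeded` after the command exits 0.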
Jan 13 17:34:25.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:34:25.809: INFO: namespace: e2e-tests-configmap-dpxdf, resource: bindings, ignored listing per whitelist Jan 13 17:34:25.853: INFO: namespace e2e-tests-configmap-dpxdf deletion completed in 6.099767981s • [SLOW TEST:10.333 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:34:25.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xch6c Jan 13 17:34:29.983: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xch6c STEP: checking the pod's current state and verifying that restartCount is present Jan 13 17:34:29.987: INFO: Initial restart count of pod liveness-http is 0 Jan 13 17:34:48.026: INFO: 
Restart count of pod e2e-tests-container-probe-xch6c/liveness-http is now 1 (18.039474411s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:34:48.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-xch6c" for this suite. Jan 13 17:34:54.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:34:54.136: INFO: namespace: e2e-tests-container-probe-xch6c, resource: bindings, ignored listing per whitelist Jan 13 17:34:54.221: INFO: namespace e2e-tests-container-probe-xch6c deletion completed in 6.107509087s • [SLOW TEST:28.367 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:34:54.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Jan 13 17:34:54.882: 
INFO: Waiting up to 5m0s for pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-8vtps" in namespace "e2e-tests-svcaccounts-8xrnn" to be "success or failure" Jan 13 17:34:54.908: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-8vtps": Phase="Pending", Reason="", readiness=false. Elapsed: 26.727254ms Jan 13 17:34:56.912: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-8vtps": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02998104s Jan 13 17:34:58.916: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-8vtps": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033756952s Jan 13 17:35:00.920: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-8vtps": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037909227s Jan 13 17:35:02.924: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-8vtps": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041867039s STEP: Saw pod success Jan 13 17:35:02.924: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-8vtps" satisfied condition "success or failure" Jan 13 17:35:02.926: INFO: Trying to get logs from node hunter-control-plane pod pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-8vtps container token-test: STEP: delete the pod Jan 13 17:35:02.979: INFO: Waiting for pod pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-8vtps to disappear Jan 13 17:35:02.991: INFO: Pod pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-8vtps no longer exists STEP: Creating a pod to test consume service account root CA Jan 13 17:35:02.994: INFO: Waiting up to 5m0s for pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-xbsdv" in namespace "e2e-tests-svcaccounts-8xrnn" to be "success or failure" Jan 13 17:35:03.009: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-xbsdv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.852082ms Jan 13 17:35:05.013: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-xbsdv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018827658s Jan 13 17:35:07.297: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-xbsdv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.302200963s Jan 13 17:35:09.332: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-xbsdv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.337565184s STEP: Saw pod success Jan 13 17:35:09.332: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-xbsdv" satisfied condition "success or failure" Jan 13 17:35:09.335: INFO: Trying to get logs from node hunter-control-plane pod pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-xbsdv container root-ca-test: STEP: delete the pod Jan 13 17:35:09.379: INFO: Waiting for pod pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-xbsdv to disappear Jan 13 17:35:09.401: INFO: Pod pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-xbsdv no longer exists STEP: Creating a pod to test consume service account namespace Jan 13 17:35:09.405: INFO: Waiting up to 5m0s for pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-2z6dp" in namespace "e2e-tests-svcaccounts-8xrnn" to be "success or failure" Jan 13 17:35:09.610: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-2z6dp": Phase="Pending", Reason="", readiness=false. Elapsed: 205.348722ms Jan 13 17:35:11.615: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-2z6dp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209965679s Jan 13 17:35:13.619: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-2z6dp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214373767s Jan 13 17:35:15.624: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-2z6dp": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.218786407s Jan 13 17:35:17.628: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-2z6dp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.223255283s STEP: Saw pod success Jan 13 17:35:17.628: INFO: Pod "pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-2z6dp" satisfied condition "success or failure" Jan 13 17:35:17.631: INFO: Trying to get logs from node hunter-control-plane pod pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-2z6dp container namespace-test: STEP: delete the pod Jan 13 17:35:17.671: INFO: Waiting for pod pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-2z6dp to disappear Jan 13 17:35:17.682: INFO: Pod pod-service-account-a61cb24f-55c5-11eb-8355-0242ac110009-2z6dp no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:35:17.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-8xrnn" for this suite. 
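The three pods above (`token-test`, `root-ca-test`, `namespace-test` containers) each read one file from the auto-mounted service account volume. A minimal sketch of the first, assuming a generic image; the mount path is the standard in-cluster location:

```yaml
# Sketch only; the image is assumed. The path is the standard token mount.
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account
spec:
  restartPolicy: Never
  containers:
  - name: token-test
    image: busybox   # assumed
    command: ["cat", "/var/run/secrets/kubernetes.io/serviceaccount/token"]
```

The same directory also contains `ca.crt` and `namespace`, which the other two pods in this test consume.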
Jan 13 17:35:23.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:35:23.741: INFO: namespace: e2e-tests-svcaccounts-8xrnn, resource: bindings, ignored listing per whitelist Jan 13 17:35:23.811: INFO: namespace e2e-tests-svcaccounts-8xrnn deletion completed in 6.125577599s • [SLOW TEST:29.589 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:35:23.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 13 17:35:44.029: INFO: Container started at 2021-01-13 17:35:26 +0000 UTC, pod became ready at 2021-01-13 17:35:43 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:35:44.029: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-h5skk" for this suite. Jan 13 17:36:06.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:36:06.117: INFO: namespace: e2e-tests-container-probe-h5skk, resource: bindings, ignored listing per whitelist Jan 13 17:36:06.165: INFO: namespace e2e-tests-container-probe-h5skk deletion completed in 22.133228186s • [SLOW TEST:42.354 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:36:06.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 13 17:36:06.273: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0a7ee27-55c5-11eb-8355-0242ac110009" in namespace "e2e-tests-downward-api-2bj7r" to be "success or 
failure" Jan 13 17:36:06.321: INFO: Pod "downwardapi-volume-d0a7ee27-55c5-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 47.645947ms Jan 13 17:36:08.325: INFO: Pod "downwardapi-volume-d0a7ee27-55c5-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051426666s Jan 13 17:36:10.329: INFO: Pod "downwardapi-volume-d0a7ee27-55c5-11eb-8355-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.055640671s Jan 13 17:36:12.333: INFO: Pod "downwardapi-volume-d0a7ee27-55c5-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059582917s STEP: Saw pod success Jan 13 17:36:12.333: INFO: Pod "downwardapi-volume-d0a7ee27-55c5-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 17:36:12.335: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-d0a7ee27-55c5-11eb-8355-0242ac110009 container client-container: STEP: delete the pod Jan 13 17:36:12.359: INFO: Waiting for pod downwardapi-volume-d0a7ee27-55c5-11eb-8355-0242ac110009 to disappear Jan 13 17:36:12.364: INFO: Pod downwardapi-volume-d0a7ee27-55c5-11eb-8355-0242ac110009 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:36:12.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2bj7r" for this suite. 
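The downward API volume pattern behind this "podname only" test can be sketched as below; the file name, mount path, and image are illustrative assumptions:

```yaml
# Hedged sketch; paths and image are illustrative, not from the log.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox   # assumed
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the pod name the test verifies in the logs
```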
Jan 13 17:36:18.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:36:18.485: INFO: namespace: e2e-tests-downward-api-2bj7r, resource: bindings, ignored listing per whitelist Jan 13 17:36:18.528: INFO: namespace e2e-tests-downward-api-2bj7r deletion completed in 6.160488668s • [SLOW TEST:12.362 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:36:18.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Jan 13 17:36:18.652: INFO: Waiting up to 5m0s for pod "var-expansion-d8074613-55c5-11eb-8355-0242ac110009" in namespace "e2e-tests-var-expansion-8zzqk" to be "success or failure" Jan 13 17:36:18.658: INFO: Pod "var-expansion-d8074613-55c5-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162046ms Jan 13 17:36:20.662: INFO: Pod "var-expansion-d8074613-55c5-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010371856s Jan 13 17:36:22.666: INFO: Pod "var-expansion-d8074613-55c5-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013908265s STEP: Saw pod success Jan 13 17:36:22.666: INFO: Pod "var-expansion-d8074613-55c5-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 17:36:22.668: INFO: Trying to get logs from node hunter-control-plane pod var-expansion-d8074613-55c5-11eb-8355-0242ac110009 container dapi-container: STEP: delete the pod Jan 13 17:36:22.774: INFO: Waiting for pod var-expansion-d8074613-55c5-11eb-8355-0242ac110009 to disappear Jan 13 17:36:22.789: INFO: Pod var-expansion-d8074613-55c5-11eb-8355-0242ac110009 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:36:22.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-8zzqk" for this suite. Jan 13 17:36:28.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:36:28.938: INFO: namespace: e2e-tests-var-expansion-8zzqk, resource: bindings, ignored listing per whitelist Jan 13 17:36:28.941: INFO: namespace e2e-tests-var-expansion-8zzqk deletion completed in 6.148755178s • [SLOW TEST:10.413 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:36:28.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-54sdx/secret-test-de42b186-55c5-11eb-8355-0242ac110009 STEP: Creating a pod to test consume secrets Jan 13 17:36:29.114: INFO: Waiting up to 5m0s for pod "pod-configmaps-de46efcf-55c5-11eb-8355-0242ac110009" in namespace "e2e-tests-secrets-54sdx" to be "success or failure" Jan 13 17:36:29.159: INFO: Pod "pod-configmaps-de46efcf-55c5-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 45.300209ms Jan 13 17:36:31.164: INFO: Pod "pod-configmaps-de46efcf-55c5-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05049047s Jan 13 17:36:33.167: INFO: Pod "pod-configmaps-de46efcf-55c5-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.053717028s STEP: Saw pod success Jan 13 17:36:33.168: INFO: Pod "pod-configmaps-de46efcf-55c5-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 17:36:33.169: INFO: Trying to get logs from node hunter-control-plane pod pod-configmaps-de46efcf-55c5-11eb-8355-0242ac110009 container env-test: STEP: delete the pod Jan 13 17:36:33.204: INFO: Waiting for pod pod-configmaps-de46efcf-55c5-11eb-8355-0242ac110009 to disappear Jan 13 17:36:33.213: INFO: Pod pod-configmaps-de46efcf-55c5-11eb-8355-0242ac110009 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:36:33.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-54sdx" for this suite. Jan 13 17:36:39.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:36:39.241: INFO: namespace: e2e-tests-secrets-54sdx, resource: bindings, ignored listing per whitelist Jan 13 17:36:39.411: INFO: namespace e2e-tests-secrets-54sdx deletion completed in 6.19257514s • [SLOW TEST:10.470 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 
17:36:39.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 13 17:36:44.057: INFO: Successfully updated pod "labelsupdatee47a64a0-55c5-11eb-8355-0242ac110009" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:36:46.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-47mt9" for this suite. Jan 13 17:37:08.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:37:08.231: INFO: namespace: e2e-tests-downward-api-47mt9, resource: bindings, ignored listing per whitelist Jan 13 17:37:08.244: INFO: namespace e2e-tests-downward-api-47mt9 deletion completed in 22.129393311s • [SLOW TEST:28.833 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:37:08.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 13 17:37:08.431: INFO: Waiting up to 5m0s for pod "pod-f5b6e7c1-55c5-11eb-8355-0242ac110009" in namespace "e2e-tests-emptydir-ngjfq" to be "success or failure" Jan 13 17:37:08.440: INFO: Pod "pod-f5b6e7c1-55c5-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 8.252731ms Jan 13 17:37:10.444: INFO: Pod "pod-f5b6e7c1-55c5-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012385497s Jan 13 17:37:12.447: INFO: Pod "pod-f5b6e7c1-55c5-11eb-8355-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.016044964s Jan 13 17:37:14.451: INFO: Pod "pod-f5b6e7c1-55c5-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019895312s STEP: Saw pod success Jan 13 17:37:14.451: INFO: Pod "pod-f5b6e7c1-55c5-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 17:37:14.454: INFO: Trying to get logs from node hunter-control-plane pod pod-f5b6e7c1-55c5-11eb-8355-0242ac110009 container test-container: STEP: delete the pod Jan 13 17:37:14.494: INFO: Waiting for pod pod-f5b6e7c1-55c5-11eb-8355-0242ac110009 to disappear Jan 13 17:37:14.506: INFO: Pod pod-f5b6e7c1-55c5-11eb-8355-0242ac110009 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:37:14.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ngjfq" for this suite. Jan 13 17:37:20.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:37:20.558: INFO: namespace: e2e-tests-emptydir-ngjfq, resource: bindings, ignored listing per whitelist Jan 13 17:37:20.618: INFO: namespace e2e-tests-emptydir-ngjfq deletion completed in 6.109655933s • [SLOW TEST:12.373 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:37:20.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 13 17:37:20.739: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 13 17:37:25.744: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 13 17:37:25.744: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 13 17:37:25.772: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-cn29q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cn29q/deployments/test-cleanup-deployment,UID:000a62b9-55c6-11eb-9c75-0242ac12000b,ResourceVersion:485497,Generation:1,CreationTimestamp:2021-01-13 17:37:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jan 13 17:37:25.809: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Jan 13 17:37:25.809: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 13 17:37:25.809: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-cn29q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cn29q/replicasets/test-cleanup-controller,UID:fd0aa746-55c5-11eb-9c75-0242ac12000b,ResourceVersion:485498,Generation:1,CreationTimestamp:2021-01-13 17:37:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 000a62b9-55c6-11eb-9c75-0242ac12000b 0xc002246817 0xc002246818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 13 17:37:25.823: INFO: Pod "test-cleanup-controller-29cvb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-29cvb,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-cn29q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cn29q/pods/test-cleanup-controller-29cvb,UID:fd0e5732-55c5-11eb-9c75-0242ac12000b,ResourceVersion:485491,Generation:0,CreationTimestamp:2021-01-13 17:37:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller fd0aa746-55c5-11eb-9c75-0242ac12000b 0xc0008b0797 0xc0008b0798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2s8gg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2s8gg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2s8gg true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0008b0880} {node.kubernetes.io/unreachable Exists NoExecute 0xc0008b08a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:37:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:37:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:37:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:37:20 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.0.58,StartTime:2021-01-13 17:37:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-13 17:37:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c843237a258ccd2593eece947d314281cb8d80f7db5cf33db4ebcdb2c662de84}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:37:25.823: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-cn29q" for this suite.
Jan 13 17:37:31.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:37:32.047: INFO: namespace: e2e-tests-deployment-cn29q, resource: bindings, ignored listing per whitelist
Jan 13 17:37:32.089: INFO: namespace e2e-tests-deployment-cn29q deletion completed in 6.228905814s
• [SLOW TEST:11.472 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:37:32.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 17:37:32.232: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 13 17:37:32.272: INFO: Number of nodes with available pods: 0
Jan 13 17:37:32.272: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 17:37:33.279: INFO: Number of nodes with available pods: 0
Jan 13 17:37:33.279: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 17:37:34.317: INFO: Number of nodes with available pods: 0
Jan 13 17:37:34.317: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 17:37:35.280: INFO: Number of nodes with available pods: 0
Jan 13 17:37:35.280: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 17:37:36.279: INFO: Number of nodes with available pods: 1
Jan 13 17:37:36.279: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 13 17:37:36.309: INFO: Wrong image for pod: daemon-set-d2rp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 13 17:37:37.334: INFO: Wrong image for pod: daemon-set-d2rp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 13 17:37:38.333: INFO: Wrong image for pod: daemon-set-d2rp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 13 17:37:39.334: INFO: Wrong image for pod: daemon-set-d2rp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 13 17:37:40.335: INFO: Wrong image for pod: daemon-set-d2rp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 13 17:37:40.335: INFO: Pod daemon-set-d2rp4 is not available
Jan 13 17:37:41.334: INFO: Wrong image for pod: daemon-set-d2rp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 13 17:37:41.334: INFO: Pod daemon-set-d2rp4 is not available
Jan 13 17:37:42.333: INFO: Wrong image for pod: daemon-set-d2rp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 13 17:37:42.333: INFO: Pod daemon-set-d2rp4 is not available
Jan 13 17:37:43.334: INFO: Wrong image for pod: daemon-set-d2rp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 13 17:37:43.334: INFO: Pod daemon-set-d2rp4 is not available
Jan 13 17:37:44.425: INFO: Wrong image for pod: daemon-set-d2rp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 13 17:37:44.426: INFO: Pod daemon-set-d2rp4 is not available
Jan 13 17:37:45.334: INFO: Wrong image for pod: daemon-set-d2rp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 13 17:37:45.334: INFO: Pod daemon-set-d2rp4 is not available
Jan 13 17:37:46.338: INFO: Wrong image for pod: daemon-set-d2rp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 13 17:37:46.338: INFO: Pod daemon-set-d2rp4 is not available
Jan 13 17:37:47.335: INFO: Wrong image for pod: daemon-set-d2rp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 13 17:37:47.335: INFO: Pod daemon-set-d2rp4 is not available
Jan 13 17:37:48.333: INFO: Wrong image for pod: daemon-set-d2rp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 13 17:37:48.333: INFO: Pod daemon-set-d2rp4 is not available
Jan 13 17:37:49.334: INFO: Pod daemon-set-jk9hv is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 13 17:37:49.345: INFO: Number of nodes with available pods: 0
Jan 13 17:37:49.345: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 17:37:50.352: INFO: Number of nodes with available pods: 0
Jan 13 17:37:50.352: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 17:37:51.353: INFO: Number of nodes with available pods: 0
Jan 13 17:37:51.353: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 17:37:52.923: INFO: Number of nodes with available pods: 0
Jan 13 17:37:52.923: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 17:37:53.351: INFO: Number of nodes with available pods: 1
Jan 13 17:37:53.351: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-c8mln, will wait for the garbage collector to delete the pods
Jan 13 17:37:53.420: INFO: Deleting DaemonSet.extensions daemon-set took: 5.924314ms
Jan 13 17:37:53.520: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.308916ms
Jan 13 17:38:09.124: INFO: Number of nodes with available pods: 0
Jan 13 17:38:09.124: INFO: Number of running nodes: 0, number of available pods: 0
Jan 13 17:38:09.127: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-c8mln/daemonsets","resourceVersion":"485659"},"items":null}
Jan 13 17:38:09.129: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-c8mln/pods","resourceVersion":"485659"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:38:09.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-c8mln" for this suite.
Jan 13 17:38:15.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:38:15.213: INFO: namespace: e2e-tests-daemonsets-c8mln, resource: bindings, ignored listing per whitelist
Jan 13 17:38:15.279: INFO: namespace e2e-tests-daemonsets-c8mln deletion completed in 6.139683981s
• [SLOW TEST:43.190 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:38:15.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-qcnwl
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StaefulSet
Jan 13 17:38:15.436: INFO: Found 0 stateful pods, waiting for 3
Jan 13 17:38:25.441: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 17:38:25.441: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 17:38:25.441: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 13 17:38:35.441: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 17:38:35.441: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 17:38:35.441: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 13 17:38:35.466: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 13 17:38:45.558: INFO: Updating stateful set ss2
Jan 13 17:38:45.589: INFO: Waiting for Pod e2e-tests-statefulset-qcnwl/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 13 17:38:55.597: INFO: Waiting for Pod e2e-tests-statefulset-qcnwl/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 13 17:39:05.772: INFO: Found 2 stateful pods, waiting for 3
Jan 13 17:39:15.776: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 17:39:15.776: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 17:39:15.776: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 13 17:39:15.857: INFO: Updating stateful set ss2
Jan 13 17:39:15.871: INFO: Waiting for Pod e2e-tests-statefulset-qcnwl/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 13 17:39:25.879: INFO: Waiting for Pod e2e-tests-statefulset-qcnwl/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 13 17:39:35.915: INFO: Updating stateful set ss2
Jan 13 17:39:35.943: INFO: Waiting for StatefulSet e2e-tests-statefulset-qcnwl/ss2 to complete update
Jan 13 17:39:35.943: INFO: Waiting for Pod e2e-tests-statefulset-qcnwl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 13 17:39:45.951: INFO: Waiting for StatefulSet e2e-tests-statefulset-qcnwl/ss2 to complete update
Jan 13 17:39:45.951: INFO: Waiting for Pod e2e-tests-statefulset-qcnwl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 13 17:39:55.952: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qcnwl
Jan 13 17:39:55.955: INFO: Scaling statefulset ss2 to 0
Jan 13 17:40:25.971: INFO: Waiting for statefulset status.replicas updated to 0
Jan 13 17:40:25.974: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:40:25.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-qcnwl" for this suite.
Jan 13 17:40:32.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:40:32.106: INFO: namespace: e2e-tests-statefulset-qcnwl, resource: bindings, ignored listing per whitelist
Jan 13 17:40:32.114: INFO: namespace e2e-tests-statefulset-qcnwl deletion completed in 6.116597133s
• [SLOW TEST:136.834 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:40:32.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan 13 17:40:32.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 13 17:40:34.793: INFO: stderr: ""
Jan 13 17:40:34.793: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:40701\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:40701/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:40:34.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v6ll5" for this suite.
Jan 13 17:40:40.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:40:40.827: INFO: namespace: e2e-tests-kubectl-v6ll5, resource: bindings, ignored listing per whitelist
Jan 13 17:40:40.910: INFO: namespace e2e-tests-kubectl-v6ll5 deletion completed in 6.113389228s
• [SLOW TEST:8.796 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl cluster-info
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:40:40.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:40:45.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-t96nw" for this suite.
Jan 13 17:41:25.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:41:25.117: INFO: namespace: e2e-tests-kubelet-test-t96nw, resource: bindings, ignored listing per whitelist
Jan 13 17:41:25.214: INFO: namespace e2e-tests-kubelet-test-t96nw deletion completed in 40.156328564s
• [SLOW TEST:44.304 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:41:25.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:41:34.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-zrc7r" for this suite.
Jan 13 17:41:56.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:41:56.475: INFO: namespace: e2e-tests-replication-controller-zrc7r, resource: bindings, ignored listing per whitelist
Jan 13 17:41:56.521: INFO: namespace e2e-tests-replication-controller-zrc7r deletion completed in 22.101831083s
• [SLOW TEST:31.307 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:41:56.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-a185f148-55c6-11eb-8355-0242ac110009
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:42:02.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-k97pv" for this suite.
Jan 13 17:42:20.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:42:20.862: INFO: namespace: e2e-tests-configmap-k97pv, resource: bindings, ignored listing per whitelist
Jan 13 17:42:20.867: INFO: namespace e2e-tests-configmap-k97pv deletion completed in 18.143799904s
• [SLOW TEST:24.346 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:42:20.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 13 17:42:20.983: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0014950-55c6-11eb-8355-0242ac110009" in namespace "e2e-tests-downward-api-xgnn4" to be "success or failure" Jan 13 17:42:21.003: INFO: Pod "downwardapi-volume-b0014950-55c6-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 20.063795ms Jan 13 17:42:23.007: INFO: Pod "downwardapi-volume-b0014950-55c6-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024230714s Jan 13 17:42:25.012: INFO: Pod "downwardapi-volume-b0014950-55c6-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028468433s STEP: Saw pod success Jan 13 17:42:25.012: INFO: Pod "downwardapi-volume-b0014950-55c6-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 17:42:25.015: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-b0014950-55c6-11eb-8355-0242ac110009 container client-container: STEP: delete the pod Jan 13 17:42:25.053: INFO: Waiting for pod downwardapi-volume-b0014950-55c6-11eb-8355-0242ac110009 to disappear Jan 13 17:42:25.065: INFO: Pod downwardapi-volume-b0014950-55c6-11eb-8355-0242ac110009 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:42:25.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xgnn4" for this suite. 
Jan 13 17:42:31.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:42:31.179: INFO: namespace: e2e-tests-downward-api-xgnn4, resource: bindings, ignored listing per whitelist
Jan 13 17:42:31.197: INFO: namespace e2e-tests-downward-api-xgnn4 deletion completed in 6.110354057s
• [SLOW TEST:10.330 seconds]
[sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:42:31.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan 13 17:42:31.313: INFO: Waiting up to 5m0s for pod "client-containers-b6290cf9-55c6-11eb-8355-0242ac110009" in namespace "e2e-tests-containers-knqdq" to be "success or failure"
Jan 13 17:42:31.317: INFO: Pod "client-containers-b6290cf9-55c6-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.715382ms
Jan 13 17:42:33.320: INFO: Pod "client-containers-b6290cf9-55c6-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007337418s
Jan 13 17:42:35.325: INFO: Pod "client-containers-b6290cf9-55c6-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012055763s
STEP: Saw pod success
Jan 13 17:42:35.325: INFO: Pod "client-containers-b6290cf9-55c6-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 17:42:35.328: INFO: Trying to get logs from node hunter-control-plane pod client-containers-b6290cf9-55c6-11eb-8355-0242ac110009 container test-container:
STEP: delete the pod
Jan 13 17:42:35.347: INFO: Waiting for pod client-containers-b6290cf9-55c6-11eb-8355-0242ac110009 to disappear
Jan 13 17:42:35.353: INFO: Pod client-containers-b6290cf9-55c6-11eb-8355-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:42:35.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-knqdq" for this suite.
Jan 13 17:42:41.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:42:41.430: INFO: namespace: e2e-tests-containers-knqdq, resource: bindings, ignored listing per whitelist
Jan 13 17:42:41.495: INFO: namespace e2e-tests-containers-knqdq deletion completed in 6.119805974s
• [SLOW TEST:10.297 seconds]
[k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:42:41.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 13 17:42:41.664: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-x22lf,SelfLink:/api/v1/namespaces/e2e-tests-watch-x22lf/configmaps/e2e-watch-test-resource-version,UID:bc4e17d5-55c6-11eb-9c75-0242ac12000b,ResourceVersion:486552,Generation:0,CreationTimestamp:2021-01-13 17:42:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 13 17:42:41.664: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-x22lf,SelfLink:/api/v1/namespaces/e2e-tests-watch-x22lf/configmaps/e2e-watch-test-resource-version,UID:bc4e17d5-55c6-11eb-9c75-0242ac12000b,ResourceVersion:486553,Generation:0,CreationTimestamp:2021-01-13 17:42:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:42:41.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-x22lf" for this suite.
Jan 13 17:42:47.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:42:47.762: INFO: namespace: e2e-tests-watch-x22lf, resource: bindings, ignored listing per whitelist
Jan 13 17:42:47.775: INFO: namespace e2e-tests-watch-x22lf deletion completed in 6.098470301s
• [SLOW TEST:6.280 seconds]
[sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:42:47.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-9p4dd/configmap-test-c00eb7af-55c6-11eb-8355-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 13 17:42:47.930: INFO: Waiting up to 5m0s for pod "pod-configmaps-c012136f-55c6-11eb-8355-0242ac110009" in namespace "e2e-tests-configmap-9p4dd" to be "success or failure"
Jan 13 17:42:47.934: INFO: Pod "pod-configmaps-c012136f-55c6-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.309828ms
Jan 13 17:42:49.938: INFO: Pod "pod-configmaps-c012136f-55c6-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00718314s
Jan 13 17:42:51.942: INFO: Pod "pod-configmaps-c012136f-55c6-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011430031s
STEP: Saw pod success
Jan 13 17:42:51.942: INFO: Pod "pod-configmaps-c012136f-55c6-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 17:42:51.945: INFO: Trying to get logs from node hunter-control-plane pod pod-configmaps-c012136f-55c6-11eb-8355-0242ac110009 container env-test:
STEP: delete the pod
Jan 13 17:42:52.020: INFO: Waiting for pod pod-configmaps-c012136f-55c6-11eb-8355-0242ac110009 to disappear
Jan 13 17:42:52.066: INFO: Pod pod-configmaps-c012136f-55c6-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:42:52.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9p4dd" for this suite.
Jan 13 17:42:58.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:42:58.115: INFO: namespace: e2e-tests-configmap-9p4dd, resource: bindings, ignored listing per whitelist
Jan 13 17:42:58.180: INFO: namespace e2e-tests-configmap-9p4dd deletion completed in 6.110531466s
• [SLOW TEST:10.404 seconds]
[sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:42:58.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-n2qb
STEP: Creating a pod to test atomic-volume-subpath
Jan 13 17:42:58.396: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-n2qb" in namespace "e2e-tests-subpath-nmjkc" to be "success or failure"
Jan 13 17:42:58.420: INFO: Pod "pod-subpath-test-secret-n2qb": Phase="Pending", Reason="", readiness=false. Elapsed: 23.473197ms
Jan 13 17:43:00.441: INFO: Pod "pod-subpath-test-secret-n2qb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044734967s
Jan 13 17:43:02.455: INFO: Pod "pod-subpath-test-secret-n2qb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059238648s
Jan 13 17:43:04.459: INFO: Pod "pod-subpath-test-secret-n2qb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063122734s
Jan 13 17:43:06.463: INFO: Pod "pod-subpath-test-secret-n2qb": Phase="Running", Reason="", readiness=false. Elapsed: 8.067335592s
Jan 13 17:43:08.468: INFO: Pod "pod-subpath-test-secret-n2qb": Phase="Running", Reason="", readiness=false. Elapsed: 10.071547732s
Jan 13 17:43:10.471: INFO: Pod "pod-subpath-test-secret-n2qb": Phase="Running", Reason="", readiness=false. Elapsed: 12.075181028s
Jan 13 17:43:12.475: INFO: Pod "pod-subpath-test-secret-n2qb": Phase="Running", Reason="", readiness=false. Elapsed: 14.079075955s
Jan 13 17:43:14.480: INFO: Pod "pod-subpath-test-secret-n2qb": Phase="Running", Reason="", readiness=false. Elapsed: 16.083972651s
Jan 13 17:43:16.484: INFO: Pod "pod-subpath-test-secret-n2qb": Phase="Running", Reason="", readiness=false. Elapsed: 18.087792296s
Jan 13 17:43:18.489: INFO: Pod "pod-subpath-test-secret-n2qb": Phase="Running", Reason="", readiness=false. Elapsed: 20.092420567s
Jan 13 17:43:20.493: INFO: Pod "pod-subpath-test-secret-n2qb": Phase="Running", Reason="", readiness=false. Elapsed: 22.096800578s
Jan 13 17:43:22.497: INFO: Pod "pod-subpath-test-secret-n2qb": Phase="Running", Reason="", readiness=false. Elapsed: 24.100820497s
Jan 13 17:43:24.501: INFO: Pod "pod-subpath-test-secret-n2qb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.105142266s
STEP: Saw pod success
Jan 13 17:43:24.501: INFO: Pod "pod-subpath-test-secret-n2qb" satisfied condition "success or failure"
Jan 13 17:43:24.505: INFO: Trying to get logs from node hunter-control-plane pod pod-subpath-test-secret-n2qb container test-container-subpath-secret-n2qb:
STEP: delete the pod
Jan 13 17:43:24.539: INFO: Waiting for pod pod-subpath-test-secret-n2qb to disappear
Jan 13 17:43:24.557: INFO: Pod pod-subpath-test-secret-n2qb no longer exists
STEP: Deleting pod pod-subpath-test-secret-n2qb
Jan 13 17:43:24.557: INFO: Deleting pod "pod-subpath-test-secret-n2qb" in namespace "e2e-tests-subpath-nmjkc"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:43:24.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-nmjkc" for this suite.
Jan 13 17:43:32.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:43:32.662: INFO: namespace: e2e-tests-subpath-nmjkc, resource: bindings, ignored listing per whitelist
Jan 13 17:43:32.691: INFO: namespace e2e-tests-subpath-nmjkc deletion completed in 8.128581246s
• [SLOW TEST:34.511 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:43:32.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 13 17:43:32.838: INFO: Waiting up to 5m0s for pod "downward-api-dad551c2-55c6-11eb-8355-0242ac110009" in namespace "e2e-tests-downward-api-pcs7n" to be "success or failure"
Jan 13 17:43:32.845: INFO: Pod "downward-api-dad551c2-55c6-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 7.055923ms
Jan 13 17:43:34.849: INFO: Pod "downward-api-dad551c2-55c6-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010954456s
Jan 13 17:43:36.855: INFO: Pod "downward-api-dad551c2-55c6-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016740632s
STEP: Saw pod success
Jan 13 17:43:36.855: INFO: Pod "downward-api-dad551c2-55c6-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 17:43:36.858: INFO: Trying to get logs from node hunter-control-plane pod downward-api-dad551c2-55c6-11eb-8355-0242ac110009 container dapi-container:
STEP: delete the pod
Jan 13 17:43:36.894: INFO: Waiting for pod downward-api-dad551c2-55c6-11eb-8355-0242ac110009 to disappear
Jan 13 17:43:36.905: INFO: Pod downward-api-dad551c2-55c6-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:43:36.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pcs7n" for this suite.
Jan 13 17:43:42.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:43:42.992: INFO: namespace: e2e-tests-downward-api-pcs7n, resource: bindings, ignored listing per whitelist
Jan 13 17:43:43.017: INFO: namespace e2e-tests-downward-api-pcs7n deletion completed in 6.109354009s
• [SLOW TEST:10.326 seconds]
[sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:43:43.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-4wfx
STEP: Creating a pod to test atomic-volume-subpath
Jan 13 17:43:43.177: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4wfx" in namespace "e2e-tests-subpath-2wltl" to be "success or failure"
Jan 13 17:43:43.184: INFO: Pod "pod-subpath-test-configmap-4wfx": Phase="Pending", Reason="", readiness=false. Elapsed: 7.76835ms
Jan 13 17:43:45.199: INFO: Pod "pod-subpath-test-configmap-4wfx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022633841s
Jan 13 17:43:47.502: INFO: Pod "pod-subpath-test-configmap-4wfx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32550325s
Jan 13 17:43:49.506: INFO: Pod "pod-subpath-test-configmap-4wfx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.329159145s
Jan 13 17:43:51.510: INFO: Pod "pod-subpath-test-configmap-4wfx": Phase="Running", Reason="", readiness=false. Elapsed: 8.333336138s
Jan 13 17:43:53.515: INFO: Pod "pod-subpath-test-configmap-4wfx": Phase="Running", Reason="", readiness=false. Elapsed: 10.33788794s
Jan 13 17:43:55.518: INFO: Pod "pod-subpath-test-configmap-4wfx": Phase="Running", Reason="", readiness=false. Elapsed: 12.341753223s
Jan 13 17:43:57.540: INFO: Pod "pod-subpath-test-configmap-4wfx": Phase="Running", Reason="", readiness=false. Elapsed: 14.363175992s
Jan 13 17:43:59.544: INFO: Pod "pod-subpath-test-configmap-4wfx": Phase="Running", Reason="", readiness=false. Elapsed: 16.367376696s
Jan 13 17:44:01.548: INFO: Pod "pod-subpath-test-configmap-4wfx": Phase="Running", Reason="", readiness=false. Elapsed: 18.371490368s
Jan 13 17:44:03.553: INFO: Pod "pod-subpath-test-configmap-4wfx": Phase="Running", Reason="", readiness=false. Elapsed: 20.376032444s
Jan 13 17:44:05.556: INFO: Pod "pod-subpath-test-configmap-4wfx": Phase="Running", Reason="", readiness=false. Elapsed: 22.379809332s
Jan 13 17:44:07.567: INFO: Pod "pod-subpath-test-configmap-4wfx": Phase="Running", Reason="", readiness=false. Elapsed: 24.390090987s
Jan 13 17:44:09.571: INFO: Pod "pod-subpath-test-configmap-4wfx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.394674665s
STEP: Saw pod success
Jan 13 17:44:09.571: INFO: Pod "pod-subpath-test-configmap-4wfx" satisfied condition "success or failure"
Jan 13 17:44:09.575: INFO: Trying to get logs from node hunter-control-plane pod pod-subpath-test-configmap-4wfx container test-container-subpath-configmap-4wfx:
STEP: delete the pod
Jan 13 17:44:09.663: INFO: Waiting for pod pod-subpath-test-configmap-4wfx to disappear
Jan 13 17:44:09.666: INFO: Pod pod-subpath-test-configmap-4wfx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-4wfx
Jan 13 17:44:09.667: INFO: Deleting pod "pod-subpath-test-configmap-4wfx" in namespace "e2e-tests-subpath-2wltl"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:44:09.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-2wltl" for this suite.
Jan 13 17:44:15.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:44:15.756: INFO: namespace: e2e-tests-subpath-2wltl, resource: bindings, ignored listing per whitelist
Jan 13 17:44:15.770: INFO: namespace e2e-tests-subpath-2wltl deletion completed in 6.099505809s
• [SLOW TEST:32.753 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:44:15.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 13 17:44:15.892: INFO: Waiting up to 5m0s for pod "pod-f4805626-55c6-11eb-8355-0242ac110009" in namespace "e2e-tests-emptydir-zz4zr" to be "success or failure"
Jan 13 17:44:15.894: INFO: Pod "pod-f4805626-55c6-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.619452ms
Jan 13 17:44:17.898: INFO: Pod "pod-f4805626-55c6-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006421795s
Jan 13 17:44:19.902: INFO: Pod "pod-f4805626-55c6-11eb-8355-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.009904496s
Jan 13 17:44:21.906: INFO: Pod "pod-f4805626-55c6-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013837512s
STEP: Saw pod success
Jan 13 17:44:21.906: INFO: Pod "pod-f4805626-55c6-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 17:44:21.908: INFO: Trying to get logs from node hunter-control-plane pod pod-f4805626-55c6-11eb-8355-0242ac110009 container test-container:
STEP: delete the pod
Jan 13 17:44:21.925: INFO: Waiting for pod pod-f4805626-55c6-11eb-8355-0242ac110009 to disappear
Jan 13 17:44:21.930: INFO: Pod pod-f4805626-55c6-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:44:21.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zz4zr" for this suite.
Jan 13 17:44:27.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:44:28.023: INFO: namespace: e2e-tests-emptydir-zz4zr, resource: bindings, ignored listing per whitelist
Jan 13 17:44:28.040: INFO: namespace e2e-tests-emptydir-zz4zr deletion completed in 6.106428488s
• [SLOW TEST:12.269 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:44:28.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 13 17:44:28.200: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-h58bz,SelfLink:/api/v1/namespaces/e2e-tests-watch-h58bz/configmaps/e2e-watch-test-configmap-a,UID:fbd6d8e1-55c6-11eb-9c75-0242ac12000b,ResourceVersion:486903,Generation:0,CreationTimestamp:2021-01-13 17:44:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 13 17:44:28.200: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-h58bz,SelfLink:/api/v1/namespaces/e2e-tests-watch-h58bz/configmaps/e2e-watch-test-configmap-a,UID:fbd6d8e1-55c6-11eb-9c75-0242ac12000b,ResourceVersion:486903,Generation:0,CreationTimestamp:2021-01-13 17:44:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 13 17:44:38.208: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-h58bz,SelfLink:/api/v1/namespaces/e2e-tests-watch-h58bz/configmaps/e2e-watch-test-configmap-a,UID:fbd6d8e1-55c6-11eb-9c75-0242ac12000b,ResourceVersion:486921,Generation:0,CreationTimestamp:2021-01-13 17:44:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 13 17:44:38.208: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-h58bz,SelfLink:/api/v1/namespaces/e2e-tests-watch-h58bz/configmaps/e2e-watch-test-configmap-a,UID:fbd6d8e1-55c6-11eb-9c75-0242ac12000b,ResourceVersion:486921,Generation:0,CreationTimestamp:2021-01-13 17:44:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 13 17:44:48.215: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-h58bz,SelfLink:/api/v1/namespaces/e2e-tests-watch-h58bz/configmaps/e2e-watch-test-configmap-a,UID:fbd6d8e1-55c6-11eb-9c75-0242ac12000b,ResourceVersion:486939,Generation:0,CreationTimestamp:2021-01-13 17:44:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 13 17:44:48.216: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-h58bz,SelfLink:/api/v1/namespaces/e2e-tests-watch-h58bz/configmaps/e2e-watch-test-configmap-a,UID:fbd6d8e1-55c6-11eb-9c75-0242ac12000b,ResourceVersion:486939,Generation:0,CreationTimestamp:2021-01-13 17:44:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 13 17:44:58.223: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-h58bz,SelfLink:/api/v1/namespaces/e2e-tests-watch-h58bz/configmaps/e2e-watch-test-configmap-a,UID:fbd6d8e1-55c6-11eb-9c75-0242ac12000b,ResourceVersion:486957,Generation:0,CreationTimestamp:2021-01-13 17:44:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 13 17:44:58.223: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-h58bz,SelfLink:/api/v1/namespaces/e2e-tests-watch-h58bz/configmaps/e2e-watch-test-configmap-a,UID:fbd6d8e1-55c6-11eb-9c75-0242ac12000b,ResourceVersion:486957,Generation:0,CreationTimestamp:2021-01-13 17:44:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 13 17:45:08.229: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-h58bz,SelfLink:/api/v1/namespaces/e2e-tests-watch-h58bz/configmaps/e2e-watch-test-configmap-b,UID:13b284af-55c7-11eb-9c75-0242ac12000b,ResourceVersion:486975,Generation:0,CreationTimestamp:2021-01-13 17:45:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 13 17:45:08.229: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-h58bz,SelfLink:/api/v1/namespaces/e2e-tests-watch-h58bz/configmaps/e2e-watch-test-configmap-b,UID:13b284af-55c7-11eb-9c75-0242ac12000b,ResourceVersion:486975,Generation:0,CreationTimestamp:2021-01-13 17:45:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap:
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 13 17:45:18.708: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-h58bz,SelfLink:/api/v1/namespaces/e2e-tests-watch-h58bz/configmaps/e2e-watch-test-configmap-b,UID:13b284af-55c7-11eb-9c75-0242ac12000b,ResourceVersion:486992,Generation:0,CreationTimestamp:2021-01-13 17:45:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 13 17:45:18.708: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-h58bz,SelfLink:/api/v1/namespaces/e2e-tests-watch-h58bz/configmaps/e2e-watch-test-configmap-b,UID:13b284af-55c7-11eb-9c75-0242ac12000b,ResourceVersion:486992,Generation:0,CreationTimestamp:2021-01-13 17:45:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:45:28.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-h58bz" for this suite. 
Jan 13 17:45:34.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:45:34.826: INFO: namespace: e2e-tests-watch-h58bz, resource: bindings, ignored listing per whitelist
Jan 13 17:45:34.916: INFO: namespace e2e-tests-watch-h58bz deletion completed in 6.202882509s
• [SLOW TEST:66.875 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:45:34.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-23b17364-55c7-11eb-8355-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 13 17:45:35.077: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-23b209e9-55c7-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-jxk4k" to be "success or failure"
Jan 13 17:45:35.081: INFO: Pod "pod-projected-configmaps-23b209e9-55c7-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.741297ms
Jan 13 17:45:37.085: INFO: Pod "pod-projected-configmaps-23b209e9-55c7-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007762154s
Jan 13 17:45:39.093: INFO: Pod "pod-projected-configmaps-23b209e9-55c7-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016055531s
STEP: Saw pod success
Jan 13 17:45:39.093: INFO: Pod "pod-projected-configmaps-23b209e9-55c7-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 17:45:39.096: INFO: Trying to get logs from node hunter-control-plane pod pod-projected-configmaps-23b209e9-55c7-11eb-8355-0242ac110009 container projected-configmap-volume-test:
STEP: delete the pod
Jan 13 17:45:39.124: INFO: Waiting for pod pod-projected-configmaps-23b209e9-55c7-11eb-8355-0242ac110009 to disappear
Jan 13 17:45:39.163: INFO: Pod pod-projected-configmaps-23b209e9-55c7-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:45:39.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jxk4k" for this suite.
Jan 13 17:45:45.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:45:45.277: INFO: namespace: e2e-tests-projected-jxk4k, resource: bindings, ignored listing per whitelist
Jan 13 17:45:45.286: INFO: namespace e2e-tests-projected-jxk4k deletion completed in 6.119869531s
• [SLOW TEST:10.369 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:45:45.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 13 17:45:45.424: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29dcda94-55c7-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-cb2gx" to be "success or failure"
Jan 13 17:45:45.450: INFO: Pod "downwardapi-volume-29dcda94-55c7-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 25.562628ms
Jan 13 17:45:47.454: INFO: Pod "downwardapi-volume-29dcda94-55c7-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029609882s
Jan 13 17:45:49.458: INFO: Pod "downwardapi-volume-29dcda94-55c7-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033698744s
STEP: Saw pod success
Jan 13 17:45:49.458: INFO: Pod "downwardapi-volume-29dcda94-55c7-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 17:45:49.461: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-29dcda94-55c7-11eb-8355-0242ac110009 container client-container:
STEP: delete the pod
Jan 13 17:45:49.536: INFO: Waiting for pod downwardapi-volume-29dcda94-55c7-11eb-8355-0242ac110009 to disappear
Jan 13 17:45:49.569: INFO: Pod downwardapi-volume-29dcda94-55c7-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:45:49.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cb2gx" for this suite.
Jan 13 17:45:55.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:45:55.619: INFO: namespace: e2e-tests-projected-cb2gx, resource: bindings, ignored listing per whitelist
Jan 13 17:45:55.674: INFO: namespace e2e-tests-projected-cb2gx deletion completed in 6.101936133s
• [SLOW TEST:10.388 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:45:55.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 17:45:55.867: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 13 17:46:00.871: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 13 17:46:00.871: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 13 17:46:02.875: INFO: Creating deployment "test-rollover-deployment"
Jan 13 17:46:02.899: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 13 17:46:04.907: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 13 17:46:04.913: INFO: Ensure that both replica sets have 1 created replica
Jan 13 17:46:04.918: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 13 17:46:04.925: INFO: Updating deployment test-rollover-deployment
Jan 13 17:46:04.925: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 13 17:46:06.968: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 13 17:46:06.974: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 13 17:46:06.979: INFO: all replica sets need to contain the pod-template-hash label
Jan 13 17:46:06.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156765, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 13 17:46:08.986: INFO: all replica sets need to contain the pod-template-hash label
Jan 13 17:46:08.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156768, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 13 17:46:10.987: INFO: all replica sets need to contain the pod-template-hash label
Jan 13 17:46:10.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156768, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 13 17:46:12.989: INFO: all replica sets need to contain the pod-template-hash label
Jan 13 17:46:12.989: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156768, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 13 17:46:14.987: INFO: all replica sets need to contain the pod-template-hash label
Jan 13 17:46:14.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156768, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 13 17:46:16.987: INFO: all replica sets need to contain the pod-template-hash label
Jan 13 17:46:16.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156768, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746156762, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 13 17:46:18.987: INFO:
Jan 13 17:46:18.987: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 13 17:46:18.995: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-kkbh9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kkbh9/deployments/test-rollover-deployment,UID:34459623-55c7-11eb-9c75-0242ac12000b,ResourceVersion:487221,Generation:2,CreationTimestamp:2021-01-13 17:46:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2021-01-13 17:46:02 +0000 UTC 2021-01-13 17:46:02 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-01-13 17:46:18 +0000 UTC 2021-01-13 17:46:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Jan 13 17:46:18.998: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-kkbh9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kkbh9/replicasets/test-rollover-deployment-5b8479fdb6,UID:357e8eac-55c7-11eb-9c75-0242ac12000b,ResourceVersion:487212,Generation:2,CreationTimestamp:2021-01-13 17:46:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 34459623-55c7-11eb-9c75-0242ac12000b 0xc0020ca607 0xc0020ca608}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 13 17:46:18.998: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 13 17:46:18.998: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-kkbh9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kkbh9/replicasets/test-rollover-controller,UID:301314ae-55c7-11eb-9c75-0242ac12000b,ResourceVersion:487220,Generation:2,CreationTimestamp:2021-01-13 17:45:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 34459623-55c7-11eb-9c75-0242ac12000b 0xc0020ca247 0xc0020ca248}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 13 17:46:18.998: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-kkbh9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kkbh9/replicasets/test-rollover-deployment-58494b7559,UID:344a884f-55c7-11eb-9c75-0242ac12000b,ResourceVersion:487180,Generation:2,CreationTimestamp:2021-01-13 17:46:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 34459623-55c7-11eb-9c75-0242ac12000b 0xc0020ca4c7 0xc0020ca4c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 13 17:46:19.001: INFO: Pod "test-rollover-deployment-5b8479fdb6-lf755" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-lf755,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-kkbh9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kkbh9/pods/test-rollover-deployment-5b8479fdb6-lf755,UID:35965daa-55c7-11eb-9c75-0242ac12000b,ResourceVersion:487192,Generation:0,CreationTimestamp:2021-01-13 17:46:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 357e8eac-55c7-11eb-9c75-0242ac12000b 0xc00168c287 0xc00168c288}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zwbfx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwbfx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-zwbfx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00168c350} {node.kubernetes.io/unreachable Exists NoExecute 0xc00168c370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:46:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:46:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:46:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:46:05 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.0.84,StartTime:2021-01-13 17:46:05 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2021-01-13 17:46:08 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://d41e63f8e715ef3860777c66b975eacf21562c86ecf966818cbec3dcbbfed3de}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:46:19.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-kkbh9" for this suite.
Jan 13 17:46:27.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:46:27.231: INFO: namespace: e2e-tests-deployment-kkbh9, resource: bindings, ignored listing per whitelist
Jan 13 17:46:27.258: INFO: namespace e2e-tests-deployment-kkbh9 deletion completed in 8.255032955s
• [SLOW TEST:31.583 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:46:27.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 13 17:46:27.371: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 13 17:46:27.385: INFO: Waiting for
terminating namespaces to be deleted... Jan 13 17:46:27.387: INFO: Logging pods the kubelet thinks is on node hunter-control-plane before test Jan 13 17:46:27.395: INFO: kindnet-jwsht from kube-system started at 2021-01-10 17:37:15 +0000 UTC (1 container statuses recorded) Jan 13 17:46:27.395: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 17:46:27.395: INFO: coredns-54ff9cd656-g95ns from kube-system started at 2021-01-10 17:37:34 +0000 UTC (1 container statuses recorded) Jan 13 17:46:27.395: INFO: Container coredns ready: true, restart count 0 Jan 13 17:46:27.395: INFO: local-path-provisioner-65f5ddcc-jw6p2 from local-path-storage started at 2021-01-10 17:37:35 +0000 UTC (1 container statuses recorded) Jan 13 17:46:27.395: INFO: Container local-path-provisioner ready: true, restart count 0 Jan 13 17:46:27.395: INFO: etcd-hunter-control-plane from kube-system started at (0 container statuses recorded) Jan 13 17:46:27.396: INFO: kube-controller-manager-hunter-control-plane from kube-system started at (0 container statuses recorded) Jan 13 17:46:27.396: INFO: chaos-controller-manager-5c78c48d45-lgvrr from default started at 2021-01-11 06:43:21 +0000 UTC (1 container statuses recorded) Jan 13 17:46:27.396: INFO: Container chaos-mesh ready: true, restart count 0 Jan 13 17:46:27.396: INFO: coredns-54ff9cd656-bt7q8 from kube-system started at 2021-01-10 17:37:35 +0000 UTC (1 container statuses recorded) Jan 13 17:46:27.396: INFO: Container coredns ready: true, restart count 0 Jan 13 17:46:27.396: INFO: chaos-daemon-2shrz from default started at 2021-01-11 06:43:21 +0000 UTC (1 container statuses recorded) Jan 13 17:46:27.396: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 17:46:27.396: INFO: kube-apiserver-hunter-control-plane from kube-system started at (0 container statuses recorded) Jan 13 17:46:27.396: INFO: kube-proxy-dqf89 from kube-system started at 2021-01-10 17:37:15 +0000 UTC (1 container statuses recorded) Jan 13 
17:46:27.396: INFO: Container kube-proxy ready: true, restart count 0 Jan 13 17:46:27.396: INFO: kube-scheduler-hunter-control-plane from kube-system started at (0 container statuses recorded) [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-454bb1e7-55c7-11eb-8355-0242ac110009 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-454bb1e7-55c7-11eb-8355-0242ac110009 off the node hunter-control-plane STEP: verifying the node doesn't have the label kubernetes.io/e2e-454bb1e7-55c7-11eb-8355-0242ac110009 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:46:35.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-96xwx" for this suite. 
Jan 13 17:46:45.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:46:45.622: INFO: namespace: e2e-tests-sched-pred-96xwx, resource: bindings, ignored listing per whitelist
Jan 13 17:46:45.699: INFO: namespace e2e-tests-sched-pred-96xwx deletion completed in 10.111865805s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:18.442 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:46:45.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 13 17:46:45.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-5tskv'
Jan 13 17:46:45.947: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 13 17:46:45.947: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 13 17:46:45.954: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 13 17:46:45.983: INFO: scanned /root for discovery docs:
Jan 13 17:46:45.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-5tskv'
Jan 13 17:47:03.152: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 13 17:47:03.152: INFO: stdout: "Created e2e-test-nginx-rc-e2026453a7b79d3f9c878dcaff84cf5e\nScaling up e2e-test-nginx-rc-e2026453a7b79d3f9c878dcaff84cf5e from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e2026453a7b79d3f9c878dcaff84cf5e up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e2026453a7b79d3f9c878dcaff84cf5e to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jan 13 17:47:03.152: INFO: stdout: "Created e2e-test-nginx-rc-e2026453a7b79d3f9c878dcaff84cf5e\nScaling up e2e-test-nginx-rc-e2026453a7b79d3f9c878dcaff84cf5e from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e2026453a7b79d3f9c878dcaff84cf5e up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e2026453a7b79d3f9c878dcaff84cf5e to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 13 17:47:03.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-5tskv'
Jan 13 17:47:03.250: INFO: stderr: ""
Jan 13 17:47:03.250: INFO: stdout: "e2e-test-nginx-rc-e2026453a7b79d3f9c878dcaff84cf5e-jsksg "
Jan 13 17:47:03.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e2026453a7b79d3f9c878dcaff84cf5e-jsksg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5tskv'
Jan 13 17:47:03.342: INFO: stderr: ""
Jan 13 17:47:03.342: INFO: stdout: "true"
Jan 13 17:47:03.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e2026453a7b79d3f9c878dcaff84cf5e-jsksg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5tskv'
Jan 13 17:47:03.445: INFO: stderr: ""
Jan 13 17:47:03.445: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 13 17:47:03.445: INFO: e2e-test-nginx-rc-e2026453a7b79d3f9c878dcaff84cf5e-jsksg is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 13 17:47:03.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-5tskv'
Jan 13 17:47:03.561: INFO: stderr: ""
Jan 13 17:47:03.561: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:47:03.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5tskv" for this suite.
Jan 13 17:47:19.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:47:19.753: INFO: namespace: e2e-tests-kubectl-5tskv, resource: bindings, ignored listing per whitelist
Jan 13 17:47:19.809: INFO: namespace e2e-tests-kubectl-5tskv deletion completed in 16.209526776s
• [SLOW TEST:34.109 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:47:19.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 13 17:47:26.473: INFO: Successfully updated pod "annotationupdate6231ce28-55c7-11eb-8355-0242ac110009"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:47:28.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ckp97" for this suite.
Jan 13 17:47:50.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:47:50.548: INFO: namespace: e2e-tests-downward-api-ckp97, resource: bindings, ignored listing per whitelist
Jan 13 17:47:50.608: INFO: namespace e2e-tests-downward-api-ckp97 deletion completed in 22.113286557s
• [SLOW TEST:30.799 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:47:50.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 13 17:47:50.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-xjsr7'
Jan 13 17:47:50.949: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 13 17:47:50.949: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan 13 17:47:53.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-xjsr7'
Jan 13 17:47:53.436: INFO: stderr: ""
Jan 13 17:47:53.436: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:47:53.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xjsr7" for this suite.
Jan 13 17:47:59.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:47:59.825: INFO: namespace: e2e-tests-kubectl-xjsr7, resource: bindings, ignored listing per whitelist
Jan 13 17:47:59.884: INFO: namespace e2e-tests-kubectl-xjsr7 deletion completed in 6.174716942s
• [SLOW TEST:9.276 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:47:59.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:48:07.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-kfc68" for this suite.
Jan 13 17:48:13.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:48:13.357: INFO: namespace: e2e-tests-namespaces-kfc68, resource: bindings, ignored listing per whitelist
Jan 13 17:48:13.360: INFO: namespace e2e-tests-namespaces-kfc68 deletion completed in 6.120485978s
STEP: Destroying namespace "e2e-tests-nsdeletetest-bgflb" for this suite.
Jan 13 17:48:13.363: INFO: Namespace e2e-tests-nsdeletetest-bgflb was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-bcp9q" for this suite.
Jan 13 17:48:19.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:48:19.460: INFO: namespace: e2e-tests-nsdeletetest-bcp9q, resource: bindings, ignored listing per whitelist
Jan 13 17:48:19.473: INFO: namespace e2e-tests-nsdeletetest-bcp9q deletion completed in 6.110160091s
• [SLOW TEST:19.589 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:48:19.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-85c5432d-55c7-11eb-8355-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 13 17:48:19.626: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-85c5c9f8-55c7-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-5dhmj" to be "success or failure"
Jan 13 17:48:19.631: INFO: Pod "pod-projected-configmaps-85c5c9f8-55c7-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142905ms
Jan 13 17:48:21.634: INFO: Pod "pod-projected-configmaps-85c5c9f8-55c7-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007686722s
Jan 13 17:48:23.638: INFO: Pod "pod-projected-configmaps-85c5c9f8-55c7-11eb-8355-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.011349896s
Jan 13 17:48:25.642: INFO: Pod "pod-projected-configmaps-85c5c9f8-55c7-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01518893s
STEP: Saw pod success
Jan 13 17:48:25.642: INFO: Pod "pod-projected-configmaps-85c5c9f8-55c7-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 17:48:25.644: INFO: Trying to get logs from node hunter-control-plane pod pod-projected-configmaps-85c5c9f8-55c7-11eb-8355-0242ac110009 container projected-configmap-volume-test:
STEP: delete the pod
Jan 13 17:48:25.673: INFO: Waiting for pod pod-projected-configmaps-85c5c9f8-55c7-11eb-8355-0242ac110009 to disappear
Jan 13 17:48:25.678: INFO: Pod pod-projected-configmaps-85c5c9f8-55c7-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:48:25.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5dhmj" for this suite.
Jan 13 17:48:31.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:48:31.719: INFO: namespace: e2e-tests-projected-5dhmj, resource: bindings, ignored listing per whitelist
Jan 13 17:48:31.802: INFO: namespace e2e-tests-projected-5dhmj deletion completed in 6.12127264s
• [SLOW TEST:12.329 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:48:31.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-8d1c9bdb-55c7-11eb-8355-0242ac110009
STEP: Creating secret with name secret-projected-all-test-volume-8d1c9bc1-55c7-11eb-8355-0242ac110009
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 13 17:48:31.961: INFO: Waiting up to 5m0s for pod "projected-volume-8d1c9b68-55c7-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-hsk4f" to be "success or failure"
Jan 13 17:48:31.968: INFO: Pod "projected-volume-8d1c9b68-55c7-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224732ms
Jan 13 17:48:34.092: INFO: Pod "projected-volume-8d1c9b68-55c7-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130812983s
Jan 13 17:48:36.095: INFO: Pod "projected-volume-8d1c9b68-55c7-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.134175989s
STEP: Saw pod success
Jan 13 17:48:36.096: INFO: Pod "projected-volume-8d1c9b68-55c7-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 17:48:36.098: INFO: Trying to get logs from node hunter-control-plane pod projected-volume-8d1c9b68-55c7-11eb-8355-0242ac110009 container projected-all-volume-test:
STEP: delete the pod
Jan 13 17:48:36.161: INFO: Waiting for pod projected-volume-8d1c9b68-55c7-11eb-8355-0242ac110009 to disappear
Jan 13 17:48:36.183: INFO: Pod projected-volume-8d1c9b68-55c7-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:48:36.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hsk4f" for this suite.
Jan 13 17:48:42.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:48:42.216: INFO: namespace: e2e-tests-projected-hsk4f, resource: bindings, ignored listing per whitelist
Jan 13 17:48:42.280: INFO: namespace e2e-tests-projected-hsk4f deletion completed in 6.094200118s
• [SLOW TEST:10.478 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:48:42.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-btpkr
Jan 13 17:48:46.439: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-btpkr
STEP: checking the pod's current state and verifying that restartCount is present
Jan 13 17:48:46.442: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:52:47.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-btpkr" for this suite.
Jan 13 17:52:53.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:52:53.125: INFO: namespace: e2e-tests-container-probe-btpkr, resource: bindings, ignored listing per whitelist
Jan 13 17:52:53.193: INFO: namespace e2e-tests-container-probe-btpkr deletion completed in 6.112480551s
• [SLOW TEST:250.913 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:52:53.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-28ea4948-55c8-11eb-8355-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 13 17:52:53.340: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-28ec5ad8-55c8-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-mpdbc" to be "success or failure"
Jan 13 17:52:53.345: INFO: Pod "pod-projected-configmaps-28ec5ad8-55c8-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29898ms
Jan 13 17:52:55.348: INFO: Pod "pod-projected-configmaps-28ec5ad8-55c8-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007980363s
Jan 13 17:52:57.352: INFO: Pod "pod-projected-configmaps-28ec5ad8-55c8-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011422836s
STEP: Saw pod success
Jan 13 17:52:57.352: INFO: Pod "pod-projected-configmaps-28ec5ad8-55c8-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 17:52:57.354: INFO: Trying to get logs from node hunter-control-plane pod pod-projected-configmaps-28ec5ad8-55c8-11eb-8355-0242ac110009 container projected-configmap-volume-test:
STEP: delete the pod
Jan 13 17:52:57.393: INFO: Waiting for pod pod-projected-configmaps-28ec5ad8-55c8-11eb-8355-0242ac110009 to disappear
Jan 13 17:52:57.404: INFO: Pod pod-projected-configmaps-28ec5ad8-55c8-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:52:57.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mpdbc" for this suite.
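This spec covers two details at once: the configMap is consumed through a projected volume with key-to-path mappings ("with mappings"), and the pod runs under a non-root UID ("as non-root"). A sketch of such a pod, with illustrative names, keys, and image (the real test generates UUID-suffixed names, as the log shows, and uses its own mount-test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  securityContext:
    runAsUser: 1000                    # the "as non-root" part
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                     # illustrative stand-in for the e2e mount-test image
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:                       # the "with mappings" part: remap a key to a nested path
          - key: data-1
            path: path/to/data-1
```

The test treats the pod like a job: it waits for phase Succeeded (the "success or failure" condition in the log), then reads the container log and checks the output.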
Jan 13 17:53:03.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:53:03.455: INFO: namespace: e2e-tests-projected-mpdbc, resource: bindings, ignored listing per whitelist
Jan 13 17:53:03.571: INFO: namespace e2e-tests-projected-mpdbc deletion completed in 6.163981979s
• [SLOW TEST:10.378 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:53:03.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:53:37.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-fshhg" for this suite.
Jan 13 17:53:43.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:53:43.772: INFO: namespace: e2e-tests-container-runtime-fshhg, resource: bindings, ignored listing per whitelist
Jan 13 17:53:43.838: INFO: namespace e2e-tests-container-runtime-fshhg deletion completed in 6.115416735s
• [SLOW TEST:40.266 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:53:43.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-471a6cbc-55c8-11eb-8355-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 13 17:53:43.979: INFO: Waiting up to 5m0s for pod "pod-configmaps-471b1b14-55c8-11eb-8355-0242ac110009" in namespace "e2e-tests-configmap-rvkcm" to be "success or failure"
Jan 13 17:53:43.995: INFO: Pod "pod-configmaps-471b1b14-55c8-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.199378ms
Jan 13 17:53:45.999: INFO: Pod "pod-configmaps-471b1b14-55c8-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02009708s
Jan 13 17:53:48.003: INFO: Pod "pod-configmaps-471b1b14-55c8-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024444072s
STEP: Saw pod success
Jan 13 17:53:48.003: INFO: Pod "pod-configmaps-471b1b14-55c8-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 17:53:48.006: INFO: Trying to get logs from node hunter-control-plane pod pod-configmaps-471b1b14-55c8-11eb-8355-0242ac110009 container configmap-volume-test:
STEP: delete the pod
Jan 13 17:53:48.026: INFO: Waiting for pod pod-configmaps-471b1b14-55c8-11eb-8355-0242ac110009 to disappear
Jan 13 17:53:48.070: INFO: Pod pod-configmaps-471b1b14-55c8-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:53:48.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rvkcm" for this suite.
Jan 13 17:53:54.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:53:54.279: INFO: namespace: e2e-tests-configmap-rvkcm, resource: bindings, ignored listing per whitelist
Jan 13 17:53:54.327: INFO: namespace e2e-tests-configmap-rvkcm deletion completed in 6.253134964s
• [SLOW TEST:10.489 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:53:54.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan 13 17:53:54.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pkc2w'
Jan 13 17:53:57.207: INFO: stderr: ""
Jan 13 17:53:57.207: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan 13 17:53:58.211: INFO: Selector matched 1 pods for map[app:redis]
Jan 13 17:53:58.211: INFO: Found 0 / 1
Jan 13 17:53:59.212: INFO: Selector matched 1 pods for map[app:redis]
Jan 13 17:53:59.212: INFO: Found 0 / 1
Jan 13 17:54:00.212: INFO: Selector matched 1 pods for map[app:redis]
Jan 13 17:54:00.212: INFO: Found 0 / 1
Jan 13 17:54:01.213: INFO: Selector matched 1 pods for map[app:redis]
Jan 13 17:54:01.214: INFO: Found 1 / 1
Jan 13 17:54:01.214: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 13 17:54:01.218: INFO: Selector matched 1 pods for map[app:redis]
Jan 13 17:54:01.218: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Jan 13 17:54:01.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zp6bp redis-master --namespace=e2e-tests-kubectl-pkc2w'
Jan 13 17:54:01.353: INFO: stderr: ""
Jan 13 17:54:01.353: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 13 Jan 17:54:00.182 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 13 Jan 17:54:00.182 # Server started, Redis version 3.2.12\n1:M 13 Jan 17:54:00.182 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 13 Jan 17:54:00.182 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 13 17:54:01.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-zp6bp redis-master --namespace=e2e-tests-kubectl-pkc2w --tail=1'
Jan 13 17:54:01.470: INFO: stderr: ""
Jan 13 17:54:01.470: INFO: stdout: "1:M 13 Jan 17:54:00.182 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 13 17:54:01.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-zp6bp redis-master --namespace=e2e-tests-kubectl-pkc2w --limit-bytes=1'
Jan 13 17:54:01.588: INFO: stderr: ""
Jan 13 17:54:01.588: INFO: stdout: " "
STEP: exposing timestamps
Jan 13 17:54:01.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-zp6bp redis-master --namespace=e2e-tests-kubectl-pkc2w --tail=1 --timestamps'
Jan 13 17:54:01.709: INFO: stderr: ""
Jan 13 17:54:01.709: INFO: stdout: "2021-01-13T17:54:00.183038248Z 1:M 13 Jan 17:54:00.182 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 13 17:54:04.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-zp6bp redis-master --namespace=e2e-tests-kubectl-pkc2w --since=1s'
Jan 13 17:54:04.321: INFO: stderr: ""
Jan 13 17:54:04.321: INFO: stdout: ""
Jan 13 17:54:04.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-zp6bp redis-master --namespace=e2e-tests-kubectl-pkc2w --since=24h'
Jan 13 17:54:04.440: INFO: stderr: ""
Jan 13 17:54:04.440: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 13 Jan 17:54:00.182 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 13 Jan 17:54:00.182 # Server started, Redis version 3.2.12\n1:M 13 Jan 17:54:00.182 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 13 Jan 17:54:00.182 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan 13 17:54:04.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pkc2w'
Jan 13 17:54:04.543: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 17:54:04.543: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 13 17:54:04.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-pkc2w'
Jan 13 17:54:04.649: INFO: stderr: "No resources found.\n"
Jan 13 17:54:04.649: INFO: stdout: ""
Jan 13 17:54:04.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-pkc2w -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 13 17:54:04.743: INFO: stderr: ""
Jan 13 17:54:04.743: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:54:04.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pkc2w" for this suite.
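The rc the test above creates from stdin corresponds to a manifest roughly like the following. This is a reconstruction from the labels (app=redis, role=master), image, and port visible elsewhere in this run; the actual file ships with the e2e test data:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379
```

The log-filtering steps then exercise `--tail`, `--limit-bytes`, `--timestamps`, `--since=1s`, and `--since=24h` against the single redis-master pod.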
Jan 13 17:54:11.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:54:11.066: INFO: namespace: e2e-tests-kubectl-pkc2w, resource: bindings, ignored listing per whitelist
Jan 13 17:54:11.109: INFO: namespace e2e-tests-kubectl-pkc2w deletion completed in 6.363182044s
• [SLOW TEST:16.781 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:54:11.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 13 17:54:11.219: INFO: Waiting up to 5m0s for pod "pod-5756aa22-55c8-11eb-8355-0242ac110009" in namespace "e2e-tests-emptydir-f6qxm" to be "success or failure"
Jan 13 17:54:11.249: INFO: Pod "pod-5756aa22-55c8-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 30.708358ms
Jan 13 17:54:13.254: INFO: Pod "pod-5756aa22-55c8-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034977374s
Jan 13 17:54:15.258: INFO: Pod "pod-5756aa22-55c8-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039555646s
STEP: Saw pod success
Jan 13 17:54:15.258: INFO: Pod "pod-5756aa22-55c8-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 17:54:15.262: INFO: Trying to get logs from node hunter-control-plane pod pod-5756aa22-55c8-11eb-8355-0242ac110009 container test-container:
STEP: delete the pod
Jan 13 17:54:15.284: INFO: Waiting for pod pod-5756aa22-55c8-11eb-8355-0242ac110009 to disappear
Jan 13 17:54:15.288: INFO: Pod pod-5756aa22-55c8-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:54:15.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-f6qxm" for this suite.
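In the (root,0666,tmpfs) triple, "root" is the user the container runs as, "0666" is the file mode the test expects on the file it writes, and "tmpfs" is the volume medium. A memory-backed emptyDir is declared with `medium: Memory`; the sketch below uses illustrative names and a stand-in command (the real test's mount-test image writes and stats the file itself):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # illustrative stand-in for the e2e mount-test image
    command: ["sh", "-c", "mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed instead of node disk
```

As in the other volume tests, the pod is expected to run to completion (phase Succeeded), and the verification happens by inspecting the container's log output.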
Jan 13 17:54:21.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:54:21.399: INFO: namespace: e2e-tests-emptydir-f6qxm, resource: bindings, ignored listing per whitelist
Jan 13 17:54:21.408: INFO: namespace e2e-tests-emptydir-f6qxm deletion completed in 6.116543098s
• [SLOW TEST:10.299 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:54:21.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:54:25.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-29lvm" for this suite.
Jan 13 17:54:31.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:54:31.748: INFO: namespace: e2e-tests-kubelet-test-29lvm, resource: bindings, ignored listing per whitelist
Jan 13 17:54:31.750: INFO: namespace e2e-tests-kubelet-test-29lvm deletion completed in 6.11097222s
• [SLOW TEST:10.342 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:54:31.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 13 17:54:39.952: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 13 17:54:39.973: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 13 17:54:41.973: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 13 17:54:41.977: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 13 17:54:43.973: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 13 17:54:43.977: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 13 17:54:45.973: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 13 17:54:45.977: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 13 17:54:47.973: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 13 17:54:47.977: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 13 17:54:49.973: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 13 17:54:49.977: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 13 17:54:51.973: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 13 17:54:51.977: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 13 17:54:53.973: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 13 17:54:53.977: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 13 17:54:55.973: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 13 17:54:55.977: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 13 17:54:57.973: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 13 17:54:57.978: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 13 17:54:59.973: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 13 17:54:59.977: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:54:59.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-v8zfp" for this suite.
Jan 13 17:55:22.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:55:22.091: INFO: namespace: e2e-tests-container-lifecycle-hook-v8zfp, resource: bindings, ignored listing per whitelist
Jan 13 17:55:22.129: INFO: namespace e2e-tests-container-lifecycle-hook-v8zfp deletion completed in 22.141900772s
• [SLOW TEST:50.379 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:55:22.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 13 17:55:26.801: INFO: Successfully updated pod "pod-update-81a902e0-55c8-11eb-8355-0242ac110009"
STEP: verifying the updated pod is in kubernetes
Jan 13 17:55:26.841: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 17:55:26.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gm65v" for this suite.
Jan 13 17:55:48.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:55:48.894: INFO: namespace: e2e-tests-pods-gm65v, resource: bindings, ignored listing per whitelist
Jan 13 17:55:49.027: INFO: namespace e2e-tests-pods-gm65v deletion completed in 22.183226699s
• [SLOW TEST:26.897 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:55:49.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 17:55:49.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 13 17:55:49.179: INFO: stderr: ""
Jan 13 17:55:49.179: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2021-01-11T14:22:23Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 13 17:55:49.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kj5ld'
Jan 13 17:55:49.476: INFO: stderr: ""
Jan 13 17:55:49.476: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan 13 17:55:49.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kj5ld'
Jan 13 17:55:49.772: INFO: stderr: ""
Jan 13 17:55:49.772: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 13 17:55:50.786: INFO: Selector matched 1 pods for map[app:redis]
Jan 13 17:55:50.787: INFO: Found 0 / 1
Jan 13 17:55:51.777: INFO: Selector matched 1 pods for map[app:redis]
Jan 13 17:55:51.777: INFO: Found 0 / 1
Jan 13 17:55:52.777: INFO: Selector matched 1 pods for map[app:redis]
Jan 13 17:55:52.777: INFO: Found 1 / 1
Jan 13 17:55:52.777: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 13 17:55:52.780: INFO: Selector matched 1 pods for map[app:redis]
Jan 13 17:55:52.780: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 13 17:55:52.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-ndq85 --namespace=e2e-tests-kubectl-kj5ld'
Jan 13 17:55:52.888: INFO: stderr: ""
Jan 13 17:55:52.888: INFO: stdout: "Name: redis-master-ndq85\nNamespace: e2e-tests-kubectl-kj5ld\nPriority: 0\nPriorityClassName: \nNode: hunter-control-plane/172.18.0.11\nStart Time: Wed, 13 Jan 2021 17:55:49 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.0.106\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://e6ce33d0b46fa45555b316e5494481a273ace2d0af7f451478ea73239e98d16d\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 13 Jan 2021 17:55:52 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-hm6bs (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-hm6bs:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-hm6bs\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned e2e-tests-kubectl-kj5ld/redis-master-ndq85 to hunter-control-plane\n Normal Pulled 2s kubelet, hunter-control-plane Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 0s kubelet, hunter-control-plane Created container\n Normal Started 0s kubelet, hunter-control-plane Started container\n"
Jan 13 17:55:52.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-kj5ld'
Jan 13 17:55:53.021: INFO: stderr: ""
Jan 13 17:55:53.022: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-kj5ld\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-ndq85\n"
Jan 13 17:55:53.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-kj5ld'
Jan 13 17:55:53.129: INFO: stderr: ""
Jan 13 17:55:53.129: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-kj5ld\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.96.32.98\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.0.106:6379\nSession Affinity: None\nEvents: \n"
Jan 13 17:55:53.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane'
Jan 13 17:55:53.278: INFO: stderr: ""
Jan 13 17:55:53.278: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 10 Jan 2021 17:35:59 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status
LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 13 Jan 2021 17:55:48 +0000 Sun, 10 Jan 2021 17:35:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 13 Jan 2021 17:55:48 +0000 Sun, 10 Jan 2021 17:35:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 13 Jan 2021 17:55:48 +0000 Sun, 10 Jan 2021 17:35:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 13 Jan 2021 17:55:48 +0000 Sun, 10 Jan 2021 17:37:31 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.11\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: ee75f89380304961947afcb0afb14274\n System UUID: 33af20d3-d015-4e0d-bb95-8b67e712516f\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nProviderID: kind://docker/hunter/hunter-control-plane\nNon-terminated Pods: (12 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n default chaos-controller-manager-5c78c48d45-lgvrr 25m (0%) 0 (0%) 256Mi (0%) 0 (0%) 2d11h\n default chaos-daemon-2shrz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d11h\n e2e-tests-kubectl-kj5ld redis-master-ndq85 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4s\n kube-system coredns-54ff9cd656-bt7q8 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 3d\n 
kube-system coredns-54ff9cd656-g95ns 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 3d\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d\n kube-system kindnet-jwsht 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 3d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 3d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 3d\n kube-system kube-proxy-dqf89 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 3d\n local-path-storage local-path-provisioner-65f5ddcc-jw6p2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 875m (5%) 100m (0%)\n memory 446Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jan 13 17:55:53.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-kj5ld' Jan 13 17:55:53.385: INFO: stderr: "" Jan 13 17:55:53.385: INFO: stdout: "Name: e2e-tests-kubectl-kj5ld\nLabels: e2e-framework=kubectl\n e2e-run=5ea38b89-55c3-11eb-8355-0242ac110009\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:55:53.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kj5ld" for this suite. 
Jan 13 17:56:17.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:56:17.447: INFO: namespace: e2e-tests-kubectl-kj5ld, resource: bindings, ignored listing per whitelist
Jan 13 17:56:17.501: INFO: namespace e2e-tests-kubectl-kj5ld deletion completed in 24.111985172s
• [SLOW TEST:28.473 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:56:17.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 13 17:56:24.682: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps]
ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:56:25.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-fj9wn" for this suite. Jan 13 17:56:47.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:56:47.875: INFO: namespace: e2e-tests-replicaset-fj9wn, resource: bindings, ignored listing per whitelist Jan 13 17:56:47.961: INFO: namespace e2e-tests-replicaset-fj9wn deletion completed in 22.229978911s • [SLOW TEST:30.460 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:56:47.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 13 17:56:48.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-xq5f6' Jan 13 17:56:48.179: INFO: stderr: "" Jan 13 17:56:48.179: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jan 13 17:56:53.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-xq5f6 -o json' Jan 13 17:56:53.333: INFO: stderr: "" Jan 13 17:56:53.333: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2021-01-13T17:56:48Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-xq5f6\",\n \"resourceVersion\": \"488999\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-xq5f6/pods/e2e-test-nginx-pod\",\n \"uid\": \"b4e455d9-55c8-11eb-9c75-0242ac12000b\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-gqsk5\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-control-plane\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n 
\"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-gqsk5\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-gqsk5\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-13T17:56:48Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-13T17:56:51Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-13T17:56:51Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-13T17:56:48Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://c0bb4436af869379d7edc6e6619e8c21cd3eb0419deffbe04860be61f42dc9b7\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-01-13T17:56:50Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.11\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.0.109\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-01-13T17:56:48Z\"\n }\n}\n" STEP: replace the image in the pod Jan 13 17:56:53.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - 
--namespace=e2e-tests-kubectl-xq5f6' Jan 13 17:56:53.591: INFO: stderr: "" Jan 13 17:56:53.591: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jan 13 17:56:53.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-xq5f6' Jan 13 17:56:59.098: INFO: stderr: "" Jan 13 17:56:59.098: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:56:59.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xq5f6" for this suite. Jan 13 17:57:05.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:57:05.187: INFO: namespace: e2e-tests-kubectl-xq5f6, resource: bindings, ignored listing per whitelist Jan 13 17:57:05.191: INFO: namespace e2e-tests-kubectl-xq5f6 deletion completed in 6.090237328s • [SLOW TEST:17.230 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:57:05.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 13 17:57:05.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 13 17:57:05.437: INFO: stderr: "" Jan 13 17:57:05.437: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2021-01-11T14:22:23Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-09-14T08:26:17Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:57:05.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vdl6f" for this suite. 
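In the Kubectl replace test earlier, the modified manifest piped into `kubectl replace -f -` swaps the pod's image from nginx:1.14-alpine to busybox:1.29, which the test then verifies. A plausible sketch of that replacement manifest (pod name, label, namespace, and image are from the log; the rest is assumed, and note that a replaced Pod may only change a small set of fields, image among them):

```yaml
# Sketch of the replacement manifest; only `image` differs from the
# original pod created by `kubectl run`.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: e2e-tests-kubectl-xq5f6
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29
```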
Jan 13 17:57:11.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:57:11.484: INFO: namespace: e2e-tests-kubectl-vdl6f, resource: bindings, ignored listing per whitelist
Jan 13 17:57:11.548: INFO: namespace e2e-tests-kubectl-vdl6f deletion completed in 6.107940055s
• [SLOW TEST:6.357 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:57:11.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-c2e31390-55c8-11eb-8355-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 13 17:57:11.689: INFO: Waiting up to 5m0s for pod "pod-configmaps-c2e5811e-55c8-11eb-8355-0242ac110009" in namespace "e2e-tests-configmap-bs6n6" to be "success or failure"
Jan 13 17:57:11.706: INFO: Pod "pod-configmaps-c2e5811e-55c8-11eb-8355-0242ac110009": Phase="Pending",
Reason="", readiness=false. Elapsed: 16.467224ms Jan 13 17:57:13.708: INFO: Pod "pod-configmaps-c2e5811e-55c8-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019433461s Jan 13 17:57:15.716: INFO: Pod "pod-configmaps-c2e5811e-55c8-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026773835s STEP: Saw pod success Jan 13 17:57:15.716: INFO: Pod "pod-configmaps-c2e5811e-55c8-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 17:57:15.718: INFO: Trying to get logs from node hunter-control-plane pod pod-configmaps-c2e5811e-55c8-11eb-8355-0242ac110009 container configmap-volume-test: STEP: delete the pod Jan 13 17:57:15.755: INFO: Waiting for pod pod-configmaps-c2e5811e-55c8-11eb-8355-0242ac110009 to disappear Jan 13 17:57:15.791: INFO: Pod pod-configmaps-c2e5811e-55c8-11eb-8355-0242ac110009 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:57:15.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-bs6n6" for this suite. 
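The ConfigMap test above creates a pod that mounts the ConfigMap as a volume and runs its container as a non-root user, then waits for the pod to exit with "success or failure". A minimal sketch of that shape (the ConfigMap and container names follow the log; the UUID suffixes are dropped, and the uid, image, mount path, and data payload are all assumptions for illustration):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume      # the real test appends a UUID suffix
data:
  data-1: value-1                  # assumed payload
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps             # UUID suffix dropped
spec:
  restartPolicy: Never             # test pods run to completion ("success or failure")
  securityContext:
    runAsUser: 1000                # non-root; the exact uid is an assumption
  containers:
  - name: configmap-volume-test    # container name appears in the log's log-fetch step
    image: busybox:1.29            # assumed image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```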
Jan 13 17:57:21.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:57:21.885: INFO: namespace: e2e-tests-configmap-bs6n6, resource: bindings, ignored listing per whitelist Jan 13 17:57:21.935: INFO: namespace e2e-tests-configmap-bs6n6 deletion completed in 6.140973401s • [SLOW TEST:10.386 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:57:21.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 13 17:57:22.078: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fcrr9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fcrr9/configmaps/e2e-watch-test-watch-closed,UID:c918f51e-55c8-11eb-9c75-0242ac12000b,ResourceVersion:489116,Generation:0,CreationTimestamp:2021-01-13 17:57:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 13 17:57:22.079: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fcrr9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fcrr9/configmaps/e2e-watch-test-watch-closed,UID:c918f51e-55c8-11eb-9c75-0242ac12000b,ResourceVersion:489117,Generation:0,CreationTimestamp:2021-01-13 17:57:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 13 17:57:22.089: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fcrr9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fcrr9/configmaps/e2e-watch-test-watch-closed,UID:c918f51e-55c8-11eb-9c75-0242ac12000b,ResourceVersion:489118,Generation:0,CreationTimestamp:2021-01-13 17:57:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 13 17:57:22.089: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fcrr9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fcrr9/configmaps/e2e-watch-test-watch-closed,UID:c918f51e-55c8-11eb-9c75-0242ac12000b,ResourceVersion:489119,Generation:0,CreationTimestamp:2021-01-13 17:57:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:57:22.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-fcrr9" for this suite. 
Jan 13 17:57:28.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 17:57:28.174: INFO: namespace: e2e-tests-watch-fcrr9, resource: bindings, ignored listing per whitelist
Jan 13 17:57:28.203: INFO: namespace e2e-tests-watch-fcrr9 deletion completed in 6.109242291s
• [SLOW TEST:6.267 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 17:57:28.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-ccdd93aa-55c8-11eb-8355-0242ac110009
STEP: Creating secret with name s-test-opt-upd-ccdd943c-55c8-11eb-8355-0242ac110009
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-ccdd93aa-55c8-11eb-8355-0242ac110009
STEP: Updating secret s-test-opt-upd-ccdd943c-55c8-11eb-8355-0242ac110009
STEP: Creating secret with name s-test-opt-create-ccdd9477-55c8-11eb-8355-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage]
Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:57:38.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ntd7h" for this suite. Jan 13 17:58:00.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:58:00.665: INFO: namespace: e2e-tests-secrets-ntd7h, resource: bindings, ignored listing per whitelist Jan 13 17:58:00.693: INFO: namespace e2e-tests-secrets-ntd7h deletion completed in 22.110296309s • [SLOW TEST:32.490 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:58:00.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 13 17:58:00.798: INFO: Creating deployment "test-recreate-deployment" Jan 13 17:58:00.845: INFO: Waiting deployment "test-recreate-deployment" to be 
updated to revision 1 Jan 13 17:58:00.851: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Jan 13 17:58:02.871: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 13 17:58:02.874: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746157480, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746157480, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746157480, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746157480, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 17:58:04.894: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746157480, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746157480, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746157480, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746157480, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 17:58:06.878: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 13 17:58:06.884: INFO: Updating deployment test-recreate-deployment Jan 13 17:58:06.884: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 13 17:58:07.449: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-bjpfs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bjpfs/deployments/test-recreate-deployment,UID:e02fff14-55c8-11eb-9c75-0242ac12000b,ResourceVersion:489291,Generation:2,CreationTimestamp:2021-01-13 17:58:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2021-01-13 17:58:07 +0000 UTC 2021-01-13 17:58:07 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2021-01-13 17:58:07 +0000 UTC 2021-01-13 17:58:00 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jan 13 17:58:07.453: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-bjpfs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bjpfs/replicasets/test-recreate-deployment-589c4bfd,UID:e3e020bb-55c8-11eb-9c75-0242ac12000b,ResourceVersion:489290,Generation:1,CreationTimestamp:2021-01-13 17:58:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e02fff14-55c8-11eb-9c75-0242ac12000b 0xc00158da4f 0xc00158dd10}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 13 17:58:07.453: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 13 17:58:07.453: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-bjpfs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bjpfs/replicasets/test-recreate-deployment-5bf7f65dc,UID:e038363c-55c8-11eb-9c75-0242ac12000b,ResourceVersion:489278,Generation:2,CreationTimestamp:2021-01-13 17:58:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e02fff14-55c8-11eb-9c75-0242ac12000b 0xc00158ddd0 0xc00158ddd1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 13 17:58:07.456: INFO: Pod "test-recreate-deployment-589c4bfd-g66kr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-g66kr,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-bjpfs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjpfs/pods/test-recreate-deployment-589c4bfd-g66kr,UID:e3e14ce1-55c8-11eb-9c75-0242ac12000b,ResourceVersion:489289,Generation:0,CreationTimestamp:2021-01-13 17:58:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd e3e020bb-55c8-11eb-9c75-0242ac12000b 0xc001593c0f 0xc001593c20}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7vgp2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7vgp2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7vgp2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001593c80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001593ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:58:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:58:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:58:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 17:58:07 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2021-01-13 17:58:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:58:07.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-bjpfs" for this suite. 
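For reference, the Recreate rollout exercised by this test corresponds to a Deployment spec along the lines of the following sketch. It is reconstructed from the spec dumped in the log above (replicas, selector, strategy, and image are taken from the dump), not the test's actual fixture:

```yaml
# Illustrative reconstruction of the Deployment used by this test,
# based on the struct dump in the log; not the test's real manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
  labels:
    name: sample-pod-3
spec:
  replicas: 1
  strategy:
    type: Recreate   # all old pods are terminated before any new pod starts
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

The `Recreate` strategy (as opposed to the default `RollingUpdate`) is why the log shows the old ReplicaSet scaled to 0 before the new pod is available, and why the Deployment briefly reports `AvailableReplicas:0`.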
Jan 13 17:58:13.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:58:13.583: INFO: namespace: e2e-tests-deployment-bjpfs, resource: bindings, ignored listing per whitelist Jan 13 17:58:13.636: INFO: namespace e2e-tests-deployment-bjpfs deletion completed in 6.176965549s • [SLOW TEST:12.942 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:58:13.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Jan 13 17:58:13.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-l2prd' Jan 13 17:58:14.019: INFO: stderr: "" Jan 13 17:58:14.019: INFO: stdout: 
"replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 13 17:58:14.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-l2prd' Jan 13 17:58:14.160: INFO: stderr: "" Jan 13 17:58:14.160: INFO: stdout: "update-demo-nautilus-qf6lb update-demo-nautilus-rpnhz " Jan 13 17:58:14.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qf6lb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-l2prd' Jan 13 17:58:14.297: INFO: stderr: "" Jan 13 17:58:14.297: INFO: stdout: "" Jan 13 17:58:14.297: INFO: update-demo-nautilus-qf6lb is created but not running Jan 13 17:58:19.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-l2prd' Jan 13 17:58:19.403: INFO: stderr: "" Jan 13 17:58:19.403: INFO: stdout: "update-demo-nautilus-qf6lb update-demo-nautilus-rpnhz " Jan 13 17:58:19.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qf6lb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-l2prd' Jan 13 17:58:19.495: INFO: stderr: "" Jan 13 17:58:19.495: INFO: stdout: "true" Jan 13 17:58:19.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qf6lb -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-l2prd' Jan 13 17:58:19.598: INFO: stderr: "" Jan 13 17:58:19.598: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 17:58:19.598: INFO: validating pod update-demo-nautilus-qf6lb Jan 13 17:58:19.602: INFO: got data: { "image": "nautilus.jpg" } Jan 13 17:58:19.602: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 13 17:58:19.602: INFO: update-demo-nautilus-qf6lb is verified up and running Jan 13 17:58:19.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rpnhz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-l2prd' Jan 13 17:58:19.702: INFO: stderr: "" Jan 13 17:58:19.702: INFO: stdout: "true" Jan 13 17:58:19.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rpnhz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-l2prd' Jan 13 17:58:19.801: INFO: stderr: "" Jan 13 17:58:19.801: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 17:58:19.801: INFO: validating pod update-demo-nautilus-rpnhz Jan 13 17:58:19.806: INFO: got data: { "image": "nautilus.jpg" } Jan 13 17:58:19.806: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 13 17:58:19.806: INFO: update-demo-nautilus-rpnhz is verified up and running STEP: rolling-update to new replication controller Jan 13 17:58:19.808: INFO: scanned /root for discovery docs: Jan 13 17:58:19.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-l2prd' Jan 13 17:58:42.626: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 13 17:58:42.626: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 13 17:58:42.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-l2prd' Jan 13 17:58:42.741: INFO: stderr: "" Jan 13 17:58:42.741: INFO: stdout: "update-demo-kitten-n8zwq update-demo-kitten-vp6dr " Jan 13 17:58:42.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-n8zwq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-l2prd' Jan 13 17:58:42.837: INFO: stderr: "" Jan 13 17:58:42.837: INFO: stdout: "true" Jan 13 17:58:42.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-n8zwq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-l2prd' Jan 13 17:58:42.938: INFO: stderr: "" Jan 13 17:58:42.938: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 13 17:58:42.938: INFO: validating pod update-demo-kitten-n8zwq Jan 13 17:58:42.951: INFO: got data: { "image": "kitten.jpg" } Jan 13 17:58:42.951: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 13 17:58:42.951: INFO: update-demo-kitten-n8zwq is verified up and running Jan 13 17:58:42.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vp6dr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-l2prd' Jan 13 17:58:43.048: INFO: stderr: "" Jan 13 17:58:43.048: INFO: stdout: "true" Jan 13 17:58:43.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vp6dr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-l2prd' Jan 13 17:58:43.136: INFO: stderr: "" Jan 13 17:58:43.136: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 13 17:58:43.136: INFO: validating pod update-demo-kitten-vp6dr Jan 13 17:58:43.140: INFO: got data: { "image": "kitten.jpg" } Jan 13 17:58:43.140: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Jan 13 17:58:43.140: INFO: update-demo-kitten-vp6dr is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:58:43.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-l2prd" for this suite. Jan 13 17:59:05.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:59:05.190: INFO: namespace: e2e-tests-kubectl-l2prd, resource: bindings, ignored listing per whitelist Jan 13 17:59:05.241: INFO: namespace e2e-tests-kubectl-l2prd deletion completed in 22.09849096s • [SLOW TEST:51.606 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:59:05.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jan 13 17:59:05.331: INFO: namespace e2e-tests-kubectl-7wfwz Jan 13 17:59:05.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7wfwz' Jan 13 17:59:05.636: INFO: stderr: "" Jan 13 17:59:05.636: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jan 13 17:59:06.782: INFO: Selector matched 1 pods for map[app:redis] Jan 13 17:59:06.782: INFO: Found 0 / 1 Jan 13 17:59:07.765: INFO: Selector matched 1 pods for map[app:redis] Jan 13 17:59:07.765: INFO: Found 0 / 1 Jan 13 17:59:08.648: INFO: Selector matched 1 pods for map[app:redis] Jan 13 17:59:08.648: INFO: Found 0 / 1 Jan 13 17:59:09.662: INFO: Selector matched 1 pods for map[app:redis] Jan 13 17:59:09.662: INFO: Found 1 / 1 Jan 13 17:59:09.662: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 13 17:59:09.668: INFO: Selector matched 1 pods for map[app:redis] Jan 13 17:59:09.668: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 13 17:59:09.668: INFO: wait on redis-master startup in e2e-tests-kubectl-7wfwz Jan 13 17:59:09.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mxqll redis-master --namespace=e2e-tests-kubectl-7wfwz' Jan 13 17:59:09.777: INFO: stderr: "" Jan 13 17:59:09.777: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 13 Jan 17:59:08.734 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 13 Jan 17:59:08.734 # Server started, Redis version 3.2.12\n1:M 13 Jan 17:59:08.734 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 13 Jan 17:59:08.734 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jan 13 17:59:09.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-7wfwz' Jan 13 17:59:09.922: INFO: stderr: "" Jan 13 17:59:09.922: INFO: stdout: "service/rm2 exposed\n" Jan 13 17:59:09.925: INFO: Service rm2 in namespace e2e-tests-kubectl-7wfwz found. STEP: exposing service Jan 13 17:59:11.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-7wfwz' Jan 13 17:59:12.071: INFO: stderr: "" Jan 13 17:59:12.071: INFO: stdout: "service/rm3 exposed\n" Jan 13 17:59:12.104: INFO: Service rm3 in namespace e2e-tests-kubectl-7wfwz found. 
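The two `kubectl expose` invocations above are shorthand for creating Service objects. A sketch of the Service that the first command produces (ports are taken from the log; the selector is assumed from the log's `map[app:redis]` pod match, and other fields are reconstructed):

```yaml
# Sketch of the Service created by:
#   kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
# Reconstructed from the log; field values not printed there are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: e2e-tests-kubectl-7wfwz
spec:
  selector:
    app: redis        # assumed: the RC's pod label, per "map[app:redis]" in the log
  ports:
  - port: 1234        # port the Service listens on
    targetPort: 6379  # container port (Redis default)
```

The second command, `kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379`, creates an analogous Service `rm3` that reuses `rm2`'s selector, so both Services front the same Redis pod on different ports.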
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:59:14.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7wfwz" for this suite. Jan 13 17:59:38.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:59:38.168: INFO: namespace: e2e-tests-kubectl-7wfwz, resource: bindings, ignored listing per whitelist Jan 13 17:59:38.223: INFO: namespace e2e-tests-kubectl-7wfwz deletion completed in 24.10803631s • [SLOW TEST:32.981 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:59:38.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-1a508e89-55c9-11eb-8355-0242ac110009 STEP: Creating a pod to 
test consume configMaps Jan 13 17:59:38.373: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a54fb4d-55c9-11eb-8355-0242ac110009" in namespace "e2e-tests-configmap-7f7n6" to be "success or failure" Jan 13 17:59:38.385: INFO: Pod "pod-configmaps-1a54fb4d-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 11.827273ms Jan 13 17:59:40.389: INFO: Pod "pod-configmaps-1a54fb4d-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01581127s Jan 13 17:59:42.393: INFO: Pod "pod-configmaps-1a54fb4d-55c9-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020281081s STEP: Saw pod success Jan 13 17:59:42.393: INFO: Pod "pod-configmaps-1a54fb4d-55c9-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 17:59:42.396: INFO: Trying to get logs from node hunter-control-plane pod pod-configmaps-1a54fb4d-55c9-11eb-8355-0242ac110009 container configmap-volume-test: STEP: delete the pod Jan 13 17:59:42.416: INFO: Waiting for pod pod-configmaps-1a54fb4d-55c9-11eb-8355-0242ac110009 to disappear Jan 13 17:59:42.420: INFO: Pod pod-configmaps-1a54fb4d-55c9-11eb-8355-0242ac110009 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 17:59:42.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7f7n6" for this suite. 
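The "consumable from pods in volume with mappings" check above follows the standard pattern of mounting a ConfigMap as a volume with per-key item mappings. A hedged sketch of that pattern (the e2e test generates UID-suffixed names and its own test image; the names, image, key, and path below are illustrative only):

```yaml
# Illustrative pod consuming a ConfigMap as a volume with item mappings.
# All names, the image, and the key/path pair are hypothetical; the real
# test uses generated names like pod-configmaps-<uid>.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29   # assumed test image
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map      # hypothetical; log names carry a UID suffix
      items:
      - key: data-2                        # ConfigMap key to project
        path: path/to/data-2               # file path under the mount point
```

The `items` mapping is what distinguishes this test from the plain ConfigMap-volume case: each listed key is projected to an explicit relative path instead of appearing as a file named after the key.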
Jan 13 17:59:48.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 17:59:48.509: INFO: namespace: e2e-tests-configmap-7f7n6, resource: bindings, ignored listing per whitelist Jan 13 17:59:48.535: INFO: namespace e2e-tests-configmap-7f7n6 deletion completed in 6.111933051s • [SLOW TEST:10.312 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 17:59:48.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-sj6dt [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-sj6dt STEP: 
Creating statefulset with conflicting port in namespace e2e-tests-statefulset-sj6dt STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-sj6dt STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-sj6dt Jan 13 17:59:54.720: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-sj6dt, name: ss-0, uid: 23f7591b-55c9-11eb-9c75-0242ac12000b, status phase: Pending. Waiting for statefulset controller to delete. Jan 13 17:59:55.101: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-sj6dt, name: ss-0, uid: 23f7591b-55c9-11eb-9c75-0242ac12000b, status phase: Failed. Waiting for statefulset controller to delete. Jan 13 17:59:55.111: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-sj6dt, name: ss-0, uid: 23f7591b-55c9-11eb-9c75-0242ac12000b, status phase: Failed. Waiting for statefulset controller to delete. Jan 13 17:59:55.160: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-sj6dt STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-sj6dt STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-sj6dt and is in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 13 17:59:59.237: INFO: Deleting all statefulsets in ns e2e-tests-statefulset-sj6dt Jan 13 17:59:59.240: INFO: Scaling statefulset ss to 0 Jan 13 18:00:09.258: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 18:00:09.261: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 18:00:09.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-sj6dt" for this suite. 
Jan 13 18:00:15.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 18:00:15.419: INFO: namespace: e2e-tests-statefulset-sj6dt, resource: bindings, ignored listing per whitelist Jan 13 18:00:15.426: INFO: namespace e2e-tests-statefulset-sj6dt deletion completed in 6.137116268s • [SLOW TEST:26.891 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 18:00:15.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jan 13 18:00:15.562: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 13 18:00:20.567: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 
Jan 13 18:00:21.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-jlfpk" for this suite. Jan 13 18:00:27.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 18:00:27.631: INFO: namespace: e2e-tests-replication-controller-jlfpk, resource: bindings, ignored listing per whitelist Jan 13 18:00:27.692: INFO: namespace e2e-tests-replication-controller-jlfpk deletion completed in 6.100229263s • [SLOW TEST:12.265 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 18:00:27.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 13 18:00:27.915: INFO: Waiting up 
to 5m0s for pod "downwardapi-volume-37d8ff9c-55c9-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-jvdmn" to be "success or failure" Jan 13 18:00:27.919: INFO: Pod "downwardapi-volume-37d8ff9c-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.598852ms Jan 13 18:00:29.922: INFO: Pod "downwardapi-volume-37d8ff9c-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007190807s Jan 13 18:00:31.927: INFO: Pod "downwardapi-volume-37d8ff9c-55c9-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011590804s STEP: Saw pod success Jan 13 18:00:31.927: INFO: Pod "downwardapi-volume-37d8ff9c-55c9-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 18:00:31.931: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-37d8ff9c-55c9-11eb-8355-0242ac110009 container client-container: STEP: delete the pod Jan 13 18:00:31.965: INFO: Waiting for pod downwardapi-volume-37d8ff9c-55c9-11eb-8355-0242ac110009 to disappear Jan 13 18:00:31.972: INFO: Pod downwardapi-volume-37d8ff9c-55c9-11eb-8355-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 18:00:31.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jvdmn" for this suite. 
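The downward API volume test above creates a pod with no memory limit and checks that the projected file reports node allocatable memory instead. A hedged sketch of such a pod — the pod name and image are illustrative; the container name `client-container` is the one the log shows:

```yaml
# Hypothetical manifest; shows limits.memory exposed via a projected downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
    - name: client-container
      image: busybox                 # assumed; the e2e test uses a small utility image
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      # no resources.limits set, so the reported value falls back to node allocatable
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      projected:
        sources:
          - downwardAPI:
              items:
                - path: memory_limit
                  resourceFieldRef:
                    containerName: client-container
                    resource: limits.memory
```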
Jan 13 18:00:38.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 18:00:38.087: INFO: namespace: e2e-tests-projected-jvdmn, resource: bindings, ignored listing per whitelist Jan 13 18:00:38.101: INFO: namespace e2e-tests-projected-jvdmn deletion completed in 6.126930933s • [SLOW TEST:10.410 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 18:00:38.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 13 18:00:38.239: INFO: Waiting up to 5m0s for pod "downward-api-3e03740d-55c9-11eb-8355-0242ac110009" in namespace "e2e-tests-downward-api-kzhbb" to be "success or failure" Jan 13 18:00:38.272: INFO: Pod "downward-api-3e03740d-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.996275ms Jan 13 18:00:40.277: INFO: Pod "downward-api-3e03740d-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037655466s Jan 13 18:00:42.281: INFO: Pod "downward-api-3e03740d-55c9-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041857933s STEP: Saw pod success Jan 13 18:00:42.281: INFO: Pod "downward-api-3e03740d-55c9-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 18:00:42.284: INFO: Trying to get logs from node hunter-control-plane pod downward-api-3e03740d-55c9-11eb-8355-0242ac110009 container dapi-container: STEP: delete the pod Jan 13 18:00:42.350: INFO: Waiting for pod downward-api-3e03740d-55c9-11eb-8355-0242ac110009 to disappear Jan 13 18:00:42.362: INFO: Pod downward-api-3e03740d-55c9-11eb-8355-0242ac110009 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 18:00:42.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-kzhbb" for this suite. 
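The Downward API test above does the same fallback check through environment variables rather than a volume. A minimal sketch, with illustrative pod name and image (`dapi-container` is the container name from the log):

```yaml
# Hypothetical manifest; env vars from resourceFieldRef with no limits declared.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  containers:
    - name: dapi-container
      image: busybox               # assumed
      command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
      env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
      # no resources.limits declared: both values default to node allocatable
```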
Jan 13 18:00:48.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 18:00:48.430: INFO: namespace: e2e-tests-downward-api-kzhbb, resource: bindings, ignored listing per whitelist Jan 13 18:00:48.536: INFO: namespace e2e-tests-downward-api-kzhbb deletion completed in 6.171075639s • [SLOW TEST:10.434 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 18:00:48.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Jan 13 18:00:48.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cwzbl' Jan 13 18:00:48.920: INFO: stderr: "" Jan 13 18:00:48.920: INFO: stdout: "pod/pause created\n" Jan 13 18:00:48.920: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 13 18:00:48.920: INFO: Waiting up to 5m0s for pod "pause" in namespace 
"e2e-tests-kubectl-cwzbl" to be "running and ready" Jan 13 18:00:48.925: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.683448ms Jan 13 18:00:50.930: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009123597s Jan 13 18:00:52.933: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.01287s Jan 13 18:00:52.933: INFO: Pod "pause" satisfied condition "running and ready" Jan 13 18:00:52.933: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Jan 13 18:00:52.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-cwzbl' Jan 13 18:00:53.052: INFO: stderr: "" Jan 13 18:00:53.052: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 13 18:00:53.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-cwzbl' Jan 13 18:00:53.172: INFO: stderr: "" Jan 13 18:00:53.172: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 13 18:00:53.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-cwzbl' Jan 13 18:00:53.275: INFO: stderr: "" Jan 13 18:00:53.275: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 13 18:00:53.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-cwzbl' Jan 13 
18:00:53.390: INFO: stderr: "" Jan 13 18:00:53.390: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Jan 13 18:00:53.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cwzbl' Jan 13 18:00:53.546: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 18:00:53.546: INFO: stdout: "pod \"pause\" force deleted\n" Jan 13 18:00:53.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-cwzbl' Jan 13 18:00:53.763: INFO: stderr: "No resources found.\n" Jan 13 18:00:53.763: INFO: stdout: "" Jan 13 18:00:53.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-cwzbl -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 13 18:00:53.909: INFO: stderr: "" Jan 13 18:00:53.909: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 18:00:53.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cwzbl" for this suite. 
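The kubectl invocations the label test runs can be replayed by hand; the pod and namespace names below are the ones from the log, and the `-L` column output matches what the test asserts. Note the trailing `-` syntax, which is how kubectl removes a label:

```shell
# Replay of the test's kubectl session (requires a cluster with the pause pod running).
kubectl label pods pause testing-label=testing-label-value -n e2e-tests-kubectl-cwzbl
kubectl get pod pause -L testing-label -n e2e-tests-kubectl-cwzbl   # TESTING-LABEL column shows the value
kubectl label pods pause testing-label- -n e2e-tests-kubectl-cwzbl  # trailing '-' removes the label
kubectl get pod pause -L testing-label -n e2e-tests-kubectl-cwzbl   # TESTING-LABEL column is now empty
```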
Jan 13 18:00:59.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 18:01:00.014: INFO: namespace: e2e-tests-kubectl-cwzbl, resource: bindings, ignored listing per whitelist Jan 13 18:01:00.050: INFO: namespace e2e-tests-kubectl-cwzbl deletion completed in 6.137676197s • [SLOW TEST:11.514 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 18:01:00.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-4b15f739-55c9-11eb-8355-0242ac110009 STEP: Creating a pod to test consume configMaps Jan 13 18:01:00.173: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4b181230-55c9-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-nm7kk" to be "success or failure" Jan 13 18:01:00.177: INFO: Pod 
"pod-projected-configmaps-4b181230-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.241584ms Jan 13 18:01:02.181: INFO: Pod "pod-projected-configmaps-4b181230-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008383581s Jan 13 18:01:04.185: INFO: Pod "pod-projected-configmaps-4b181230-55c9-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012655193s STEP: Saw pod success Jan 13 18:01:04.186: INFO: Pod "pod-projected-configmaps-4b181230-55c9-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 18:01:04.188: INFO: Trying to get logs from node hunter-control-plane pod pod-projected-configmaps-4b181230-55c9-11eb-8355-0242ac110009 container projected-configmap-volume-test: STEP: delete the pod Jan 13 18:01:04.202: INFO: Waiting for pod pod-projected-configmaps-4b181230-55c9-11eb-8355-0242ac110009 to disappear Jan 13 18:01:04.206: INFO: Pod pod-projected-configmaps-4b181230-55c9-11eb-8355-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 18:01:04.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nm7kk" for this suite. 
Jan 13 18:01:10.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 18:01:10.319: INFO: namespace: e2e-tests-projected-nm7kk, resource: bindings, ignored listing per whitelist Jan 13 18:01:10.326: INFO: namespace e2e-tests-projected-nm7kk deletion completed in 6.113225866s • [SLOW TEST:10.277 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 18:01:10.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 13 18:01:10.435: INFO: Waiting up to 5m0s for pod "pod-51376ae6-55c9-11eb-8355-0242ac110009" in namespace "e2e-tests-emptydir-t92tv" to be "success or failure" Jan 13 18:01:10.487: INFO: Pod "pod-51376ae6-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 52.039951ms Jan 13 18:01:12.490: INFO: Pod "pod-51376ae6-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.054910987s Jan 13 18:01:15.333: INFO: Pod "pod-51376ae6-55c9-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.897136619s STEP: Saw pod success Jan 13 18:01:15.333: INFO: Pod "pod-51376ae6-55c9-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 18:01:15.335: INFO: Trying to get logs from node hunter-control-plane pod pod-51376ae6-55c9-11eb-8355-0242ac110009 container test-container: STEP: delete the pod Jan 13 18:01:15.536: INFO: Waiting for pod pod-51376ae6-55c9-11eb-8355-0242ac110009 to disappear Jan 13 18:01:15.560: INFO: Pod pod-51376ae6-55c9-11eb-8355-0242ac110009 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 18:01:15.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-t92tv" for this suite. Jan 13 18:01:21.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 18:01:21.638: INFO: namespace: e2e-tests-emptydir-t92tv, resource: bindings, ignored listing per whitelist Jan 13 18:01:21.689: INFO: namespace e2e-tests-emptydir-t92tv deletion completed in 6.12656942s • [SLOW TEST:11.363 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client Jan 13 18:01:21.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 13 18:01:21.844: INFO: Waiting up to 5m0s for pod "pod-57fa0c1c-55c9-11eb-8355-0242ac110009" in namespace "e2e-tests-emptydir-4jmhk" to be "success or failure" Jan 13 18:01:21.866: INFO: Pod "pod-57fa0c1c-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 22.599706ms Jan 13 18:01:23.907: INFO: Pod "pod-57fa0c1c-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062776019s Jan 13 18:01:25.910: INFO: Pod "pod-57fa0c1c-55c9-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066375049s STEP: Saw pod success Jan 13 18:01:25.910: INFO: Pod "pod-57fa0c1c-55c9-11eb-8355-0242ac110009" satisfied condition "success or failure" Jan 13 18:01:25.913: INFO: Trying to get logs from node hunter-control-plane pod pod-57fa0c1c-55c9-11eb-8355-0242ac110009 container test-container: STEP: delete the pod Jan 13 18:01:25.927: INFO: Waiting for pod pod-57fa0c1c-55c9-11eb-8355-0242ac110009 to disappear Jan 13 18:01:25.966: INFO: Pod pod-57fa0c1c-55c9-11eb-8355-0242ac110009 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 18:01:25.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4jmhk" for this suite. 
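Both EmptyDir variants above follow the same shape: a pod mounts an emptyDir volume on the default medium and a test container creates a file with the requested mode (0777 as root, 0666 as non-root), then verifies ownership and permissions. A hedged sketch covering the (non-root,0666,default) case — pod name, image, and UID are illustrative; `test-container` is the container name from the log:

```yaml
# Hypothetical manifest for the emptyDir permission check.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example
spec:
  containers:
    - name: test-container
      image: busybox               # assumed
      command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
      securityContext:
        runAsUser: 1000            # drop this for the (root,0777,default) variant
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
  volumes:
    - name: test-volume
      emptyDir: {}                 # default medium (node disk); medium: Memory would use tmpfs
```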
Jan 13 18:01:31.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 13 18:01:32.009: INFO: namespace: e2e-tests-emptydir-4jmhk, resource: bindings, ignored listing per whitelist Jan 13 18:01:32.065: INFO: namespace e2e-tests-emptydir-4jmhk deletion completed in 6.095773809s • [SLOW TEST:10.376 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 13 18:01:32.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-9n8g7 I0113 18:01:32.208719 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-9n8g7, replica count: 1 I0113 18:01:33.259151 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 18:01:34.259369 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0113 18:01:35.259636 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 18:01:36.259855 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 18:01:36.402: INFO: Created: latency-svc-ckq2n Jan 13 18:01:36.421: INFO: Got endpoints: latency-svc-ckq2n [61.456155ms] Jan 13 18:01:36.510: INFO: Created: latency-svc-dsplz Jan 13 18:01:36.525: INFO: Got endpoints: latency-svc-dsplz [103.760432ms] Jan 13 18:01:36.558: INFO: Created: latency-svc-lq7bg Jan 13 18:01:36.597: INFO: Got endpoints: latency-svc-lq7bg [175.989877ms] Jan 13 18:01:36.612: INFO: Created: latency-svc-b9xb8 Jan 13 18:01:36.630: INFO: Got endpoints: latency-svc-b9xb8 [208.385009ms] Jan 13 18:01:36.648: INFO: Created: latency-svc-8sgjn Jan 13 18:01:36.666: INFO: Got endpoints: latency-svc-8sgjn [244.527322ms] Jan 13 18:01:36.689: INFO: Created: latency-svc-p4hg5 Jan 13 18:01:36.739: INFO: Got endpoints: latency-svc-p4hg5 [317.973651ms] Jan 13 18:01:36.773: INFO: Created: latency-svc-f7s4l Jan 13 18:01:36.795: INFO: Got endpoints: latency-svc-f7s4l [373.074026ms] Jan 13 18:01:36.815: INFO: Created: latency-svc-nnbz6 Jan 13 18:01:36.825: INFO: Got endpoints: latency-svc-nnbz6 [403.86435ms] Jan 13 18:01:36.891: INFO: Created: latency-svc-rk8xm Jan 13 18:01:36.917: INFO: Got endpoints: latency-svc-rk8xm [495.912015ms] Jan 13 18:01:36.918: INFO: Created: latency-svc-jrglb Jan 13 18:01:36.947: INFO: Got endpoints: latency-svc-jrglb [525.922346ms] Jan 13 18:01:37.039: INFO: Created: latency-svc-mcng7 Jan 13 18:01:37.067: INFO: Got endpoints: latency-svc-mcng7 [645.049346ms] Jan 13 18:01:37.067: INFO: Created: latency-svc-bztxd Jan 13 18:01:37.090: INFO: Got endpoints: latency-svc-bztxd [668.939601ms] Jan 13 18:01:37.115: INFO: Created: latency-svc-22c8q Jan 13 18:01:37.125: INFO: Got 
endpoints: latency-svc-22c8q [703.321791ms] Jan 13 18:01:37.139: INFO: Created: latency-svc-r4554 Jan 13 18:01:37.182: INFO: Got endpoints: latency-svc-r4554 [760.663098ms] Jan 13 18:01:37.205: INFO: Created: latency-svc-4pbxp Jan 13 18:01:37.244: INFO: Got endpoints: latency-svc-4pbxp [822.227027ms] Jan 13 18:01:37.259: INFO: Created: latency-svc-p7jt8 Jan 13 18:01:37.273: INFO: Got endpoints: latency-svc-p7jt8 [851.802633ms] Jan 13 18:01:37.326: INFO: Created: latency-svc-2pvm2 Jan 13 18:01:37.360: INFO: Got endpoints: latency-svc-2pvm2 [835.181192ms] Jan 13 18:01:37.361: INFO: Created: latency-svc-jcmrf Jan 13 18:01:37.375: INFO: Got endpoints: latency-svc-jcmrf [777.944076ms] Jan 13 18:01:37.397: INFO: Created: latency-svc-t6kvp Jan 13 18:01:37.411: INFO: Got endpoints: latency-svc-t6kvp [781.374991ms] Jan 13 18:01:37.496: INFO: Created: latency-svc-z687w Jan 13 18:01:37.554: INFO: Created: latency-svc-7hxtl Jan 13 18:01:37.554: INFO: Got endpoints: latency-svc-z687w [887.774047ms] Jan 13 18:01:37.567: INFO: Got endpoints: latency-svc-7hxtl [827.728396ms] Jan 13 18:01:37.590: INFO: Created: latency-svc-7rxpd Jan 13 18:01:37.649: INFO: Got endpoints: latency-svc-7rxpd [854.895195ms] Jan 13 18:01:37.672: INFO: Created: latency-svc-tgwd7 Jan 13 18:01:37.682: INFO: Got endpoints: latency-svc-tgwd7 [856.860238ms] Jan 13 18:01:37.708: INFO: Created: latency-svc-28s7z Jan 13 18:01:37.718: INFO: Got endpoints: latency-svc-28s7z [800.203795ms] Jan 13 18:01:37.739: INFO: Created: latency-svc-82jvg Jan 13 18:01:37.748: INFO: Got endpoints: latency-svc-82jvg [800.420795ms] Jan 13 18:01:37.790: INFO: Created: latency-svc-tsdnx Jan 13 18:01:37.802: INFO: Got endpoints: latency-svc-tsdnx [735.146686ms] Jan 13 18:01:37.823: INFO: Created: latency-svc-m7r68 Jan 13 18:01:37.838: INFO: Got endpoints: latency-svc-m7r68 [747.566528ms] Jan 13 18:01:37.865: INFO: Created: latency-svc-xpb8n Jan 13 18:01:37.919: INFO: Got endpoints: latency-svc-xpb8n [794.023732ms] Jan 13 18:01:37.943: 
INFO: Created: latency-svc-lwqxs Jan 13 18:01:37.962: INFO: Got endpoints: latency-svc-lwqxs [780.073675ms] Jan 13 18:01:37.990: INFO: Created: latency-svc-gn68f Jan 13 18:01:38.017: INFO: Got endpoints: latency-svc-gn68f [772.724048ms] Jan 13 18:01:38.051: INFO: Created: latency-svc-hfcfv Jan 13 18:01:38.064: INFO: Got endpoints: latency-svc-hfcfv [790.676824ms] Jan 13 18:01:38.087: INFO: Created: latency-svc-csl7w Jan 13 18:01:38.101: INFO: Got endpoints: latency-svc-csl7w [740.299512ms] Jan 13 18:01:38.124: INFO: Created: latency-svc-ggmv4 Jan 13 18:01:38.136: INFO: Got endpoints: latency-svc-ggmv4 [761.114321ms] Jan 13 18:01:38.182: INFO: Created: latency-svc-cns4g Jan 13 18:01:38.236: INFO: Got endpoints: latency-svc-cns4g [825.032537ms] Jan 13 18:01:38.236: INFO: Created: latency-svc-bjbgn Jan 13 18:01:38.256: INFO: Got endpoints: latency-svc-bjbgn [702.455274ms] Jan 13 18:01:38.279: INFO: Created: latency-svc-8kwcq Jan 13 18:01:38.316: INFO: Got endpoints: latency-svc-8kwcq [748.824863ms] Jan 13 18:01:38.339: INFO: Created: latency-svc-tnsc7 Jan 13 18:01:38.359: INFO: Got endpoints: latency-svc-tnsc7 [709.612299ms] Jan 13 18:01:38.393: INFO: Created: latency-svc-9mrc5 Jan 13 18:01:38.413: INFO: Got endpoints: latency-svc-9mrc5 [730.848794ms] Jan 13 18:01:38.530: INFO: Created: latency-svc-hxgvq Jan 13 18:01:38.561: INFO: Got endpoints: latency-svc-hxgvq [843.056242ms] Jan 13 18:01:38.561: INFO: Created: latency-svc-fg4w5 Jan 13 18:01:38.591: INFO: Got endpoints: latency-svc-fg4w5 [843.050096ms] Jan 13 18:01:38.676: INFO: Created: latency-svc-gfshh Jan 13 18:01:38.704: INFO: Got endpoints: latency-svc-gfshh [901.864671ms] Jan 13 18:01:38.704: INFO: Created: latency-svc-fvz7f Jan 13 18:01:38.724: INFO: Got endpoints: latency-svc-fvz7f [886.083516ms] Jan 13 18:01:38.759: INFO: Created: latency-svc-2lf8z Jan 13 18:01:38.841: INFO: Got endpoints: latency-svc-2lf8z [922.210635ms] Jan 13 18:01:38.844: INFO: Created: latency-svc-g2ths Jan 13 18:01:38.861: INFO: Got 
endpoints: latency-svc-g2ths [898.576417ms] Jan 13 18:01:38.885: INFO: Created: latency-svc-4ld64 Jan 13 18:01:38.903: INFO: Got endpoints: latency-svc-4ld64 [886.51911ms] Jan 13 18:01:38.994: INFO: Created: latency-svc-7jf7k Jan 13 18:01:39.017: INFO: Created: latency-svc-v8ktx Jan 13 18:01:39.017: INFO: Got endpoints: latency-svc-7jf7k [952.469033ms] Jan 13 18:01:39.053: INFO: Got endpoints: latency-svc-v8ktx [952.102303ms] Jan 13 18:01:39.083: INFO: Created: latency-svc-pn6js Jan 13 18:01:39.135: INFO: Got endpoints: latency-svc-pn6js [998.302006ms] Jan 13 18:01:39.142: INFO: Created: latency-svc-qlzw9 Jan 13 18:01:39.155: INFO: Got endpoints: latency-svc-qlzw9 [918.070247ms] Jan 13 18:01:39.178: INFO: Created: latency-svc-c7tcw Jan 13 18:01:39.192: INFO: Got endpoints: latency-svc-c7tcw [935.487657ms] Jan 13 18:01:39.214: INFO: Created: latency-svc-2bz2f Jan 13 18:01:39.228: INFO: Got endpoints: latency-svc-2bz2f [911.78642ms] Jan 13 18:01:39.281: INFO: Created: latency-svc-frp7v Jan 13 18:01:39.311: INFO: Got endpoints: latency-svc-frp7v [951.357153ms] Jan 13 18:01:39.312: INFO: Created: latency-svc-89pck Jan 13 18:01:39.347: INFO: Got endpoints: latency-svc-89pck [933.848227ms] Jan 13 18:01:39.416: INFO: Created: latency-svc-z6pms Jan 13 18:01:39.454: INFO: Got endpoints: latency-svc-z6pms [892.627276ms] Jan 13 18:01:39.454: INFO: Created: latency-svc-dxghk Jan 13 18:01:39.474: INFO: Got endpoints: latency-svc-dxghk [882.916688ms] Jan 13 18:01:39.502: INFO: Created: latency-svc-kgfpn Jan 13 18:01:39.556: INFO: Got endpoints: latency-svc-kgfpn [851.92691ms] Jan 13 18:01:39.574: INFO: Created: latency-svc-82c8c Jan 13 18:01:39.592: INFO: Got endpoints: latency-svc-82c8c [868.263054ms] Jan 13 18:01:39.617: INFO: Created: latency-svc-5sjsb Jan 13 18:01:39.640: INFO: Got endpoints: latency-svc-5sjsb [798.951767ms] Jan 13 18:01:39.716: INFO: Created: latency-svc-f74hr Jan 13 18:01:39.778: INFO: Got endpoints: latency-svc-f74hr [916.764767ms] Jan 13 18:01:39.778: 
INFO: Created: latency-svc-2f4tn Jan 13 18:01:39.795: INFO: Got endpoints: latency-svc-2f4tn [892.095868ms] Jan 13 18:01:39.814: INFO: Created: latency-svc-k6wrv Jan 13 18:01:39.867: INFO: Got endpoints: latency-svc-k6wrv [850.35868ms] Jan 13 18:01:39.899: INFO: Created: latency-svc-jjp7c Jan 13 18:01:39.917: INFO: Got endpoints: latency-svc-jjp7c [864.175303ms] Jan 13 18:01:39.934: INFO: Created: latency-svc-pdc7g Jan 13 18:01:39.953: INFO: Got endpoints: latency-svc-pdc7g [817.768575ms] Jan 13 18:01:40.009: INFO: Created: latency-svc-l9tsv Jan 13 18:01:40.031: INFO: Created: latency-svc-m57w7 Jan 13 18:01:40.031: INFO: Got endpoints: latency-svc-l9tsv [876.343807ms] Jan 13 18:01:40.048: INFO: Got endpoints: latency-svc-m57w7 [856.364119ms] Jan 13 18:01:40.071: INFO: Created: latency-svc-tw6rh Jan 13 18:01:40.084: INFO: Got endpoints: latency-svc-tw6rh [856.38198ms] Jan 13 18:01:40.108: INFO: Created: latency-svc-cmn8h Jan 13 18:01:40.167: INFO: Got endpoints: latency-svc-cmn8h [855.896841ms] Jan 13 18:01:40.168: INFO: Created: latency-svc-tml2s Jan 13 18:01:40.181: INFO: Got endpoints: latency-svc-tml2s [833.728124ms] Jan 13 18:01:40.204: INFO: Created: latency-svc-fpk9q Jan 13 18:01:40.215: INFO: Got endpoints: latency-svc-fpk9q [761.2186ms] Jan 13 18:01:40.229: INFO: Created: latency-svc-wh8nh Jan 13 18:01:40.251: INFO: Got endpoints: latency-svc-wh8nh [777.162683ms] Jan 13 18:01:40.327: INFO: Created: latency-svc-n2lfx Jan 13 18:01:40.353: INFO: Got endpoints: latency-svc-n2lfx [797.481052ms] Jan 13 18:01:40.354: INFO: Created: latency-svc-68jgx Jan 13 18:01:40.371: INFO: Got endpoints: latency-svc-68jgx [778.14092ms] Jan 13 18:01:40.414: INFO: Created: latency-svc-9vm8j Jan 13 18:01:40.454: INFO: Got endpoints: latency-svc-9vm8j [813.563393ms] Jan 13 18:01:40.493: INFO: Created: latency-svc-bvc95 Jan 13 18:01:40.527: INFO: Got endpoints: latency-svc-bvc95 [748.992286ms] Jan 13 18:01:40.553: INFO: Created: latency-svc-x8g8b Jan 13 18:01:40.614: INFO: Got 
endpoints: latency-svc-x8g8b [818.40287ms] Jan 13 18:01:40.616: INFO: Created: latency-svc-n7chm Jan 13 18:01:40.623: INFO: Got endpoints: latency-svc-n7chm [755.908852ms] Jan 13 18:01:40.647: INFO: Created: latency-svc-h5zwh Jan 13 18:01:40.660: INFO: Got endpoints: latency-svc-h5zwh [742.520225ms] Jan 13 18:01:40.677: INFO: Created: latency-svc-ljh69 Jan 13 18:01:40.689: INFO: Got endpoints: latency-svc-ljh69 [736.693895ms] Jan 13 18:01:40.708: INFO: Created: latency-svc-wpzn8 Jan 13 18:01:40.747: INFO: Got endpoints: latency-svc-wpzn8 [716.097077ms] Jan 13 18:01:40.762: INFO: Created: latency-svc-flj5z Jan 13 18:01:40.792: INFO: Got endpoints: latency-svc-flj5z [743.774076ms] Jan 13 18:01:40.822: INFO: Created: latency-svc-2rcm4 Jan 13 18:01:40.839: INFO: Got endpoints: latency-svc-2rcm4 [754.858008ms] Jan 13 18:01:40.883: INFO: Created: latency-svc-x5g74 Jan 13 18:01:40.887: INFO: Got endpoints: latency-svc-x5g74 [720.252237ms] Jan 13 18:01:40.911: INFO: Created: latency-svc-b4jrc Jan 13 18:01:40.928: INFO: Got endpoints: latency-svc-b4jrc [747.438834ms] Jan 13 18:01:40.949: INFO: Created: latency-svc-xcc6c Jan 13 18:01:40.970: INFO: Got endpoints: latency-svc-xcc6c [754.984535ms] Jan 13 18:01:41.035: INFO: Created: latency-svc-fj4x9 Jan 13 18:01:41.079: INFO: Got endpoints: latency-svc-fj4x9 [827.553953ms] Jan 13 18:01:41.079: INFO: Created: latency-svc-t656l Jan 13 18:01:41.096: INFO: Got endpoints: latency-svc-t656l [742.348895ms] Jan 13 18:01:41.115: INFO: Created: latency-svc-krpsg Jan 13 18:01:41.158: INFO: Got endpoints: latency-svc-krpsg [787.336478ms] Jan 13 18:01:41.175: INFO: Created: latency-svc-nczvf Jan 13 18:01:41.192: INFO: Got endpoints: latency-svc-nczvf [737.656619ms] Jan 13 18:01:41.218: INFO: Created: latency-svc-lnrcx Jan 13 18:01:41.246: INFO: Got endpoints: latency-svc-lnrcx [718.898579ms] Jan 13 18:01:41.305: INFO: Created: latency-svc-2882w Jan 13 18:01:41.331: INFO: Got endpoints: latency-svc-2882w [717.665029ms] Jan 13 18:01:41.333: 
INFO: Created: latency-svc-5v6df Jan 13 18:01:41.348: INFO: Got endpoints: latency-svc-5v6df [725.111756ms] Jan 13 18:01:41.367: INFO: Created: latency-svc-kf87w Jan 13 18:01:41.384: INFO: Got endpoints: latency-svc-kf87w [724.400813ms] Jan 13 18:01:41.464: INFO: Created: latency-svc-zxftn Jan 13 18:01:41.499: INFO: Got endpoints: latency-svc-zxftn [809.136043ms] Jan 13 18:01:41.499: INFO: Created: latency-svc-kr74f Jan 13 18:01:41.516: INFO: Got endpoints: latency-svc-kr74f [768.71802ms] Jan 13 18:01:41.559: INFO: Created: latency-svc-twq8z Jan 13 18:01:41.597: INFO: Got endpoints: latency-svc-twq8z [804.953622ms] Jan 13 18:01:41.626: INFO: Created: latency-svc-25tfj Jan 13 18:01:41.660: INFO: Got endpoints: latency-svc-25tfj [820.84084ms] Jan 13 18:01:41.740: INFO: Created: latency-svc-bj5j9 Jan 13 18:01:41.749: INFO: Got endpoints: latency-svc-bj5j9 [861.560707ms] Jan 13 18:01:41.787: INFO: Created: latency-svc-4c9px Jan 13 18:01:41.802: INFO: Got endpoints: latency-svc-4c9px [874.314332ms] Jan 13 18:01:41.823: INFO: Created: latency-svc-nnvjx Jan 13 18:01:41.879: INFO: Got endpoints: latency-svc-nnvjx [908.989677ms] Jan 13 18:01:41.901: INFO: Created: latency-svc-9pc49 Jan 13 18:01:41.911: INFO: Got endpoints: latency-svc-9pc49 [832.091524ms] Jan 13 18:01:41.931: INFO: Created: latency-svc-ldlpt Jan 13 18:01:41.941: INFO: Got endpoints: latency-svc-ldlpt [844.795154ms] Jan 13 18:01:41.955: INFO: Created: latency-svc-95q8x Jan 13 18:01:41.965: INFO: Got endpoints: latency-svc-95q8x [806.479737ms] Jan 13 18:01:42.051: INFO: Created: latency-svc-vqxbq Jan 13 18:01:42.099: INFO: Got endpoints: latency-svc-vqxbq [906.935493ms] Jan 13 18:01:42.099: INFO: Created: latency-svc-x4qzx Jan 13 18:01:42.109: INFO: Got endpoints: latency-svc-x4qzx [863.018682ms] Jan 13 18:01:42.122: INFO: Created: latency-svc-zxfss Jan 13 18:01:42.147: INFO: Got endpoints: latency-svc-zxfss [815.252916ms] Jan 13 18:01:42.203: INFO: Created: latency-svc-zlkx4 Jan 13 18:01:42.211: INFO: Got 
endpoints: latency-svc-zlkx4 [862.813659ms] Jan 13 18:01:42.231: INFO: Created: latency-svc-q82r4 Jan 13 18:01:42.247: INFO: Got endpoints: latency-svc-q82r4 [862.715795ms] Jan 13 18:01:42.291: INFO: Created: latency-svc-b6vf8 Jan 13 18:01:42.344: INFO: Got endpoints: latency-svc-b6vf8 [845.468355ms] Jan 13 18:01:42.368: INFO: Created: latency-svc-cp67d Jan 13 18:01:42.385: INFO: Got endpoints: latency-svc-cp67d [869.040594ms] Jan 13 18:01:42.422: INFO: Created: latency-svc-vlnf2 Jan 13 18:01:42.472: INFO: Got endpoints: latency-svc-vlnf2 [875.215953ms] Jan 13 18:01:42.495: INFO: Created: latency-svc-w77g9 Jan 13 18:01:42.524: INFO: Got endpoints: latency-svc-w77g9 [864.416358ms] Jan 13 18:01:42.555: INFO: Created: latency-svc-7grrd Jan 13 18:01:42.626: INFO: Got endpoints: latency-svc-7grrd [876.922314ms] Jan 13 18:01:42.680: INFO: Created: latency-svc-g4wfz Jan 13 18:01:42.695: INFO: Got endpoints: latency-svc-g4wfz [892.733838ms] Jan 13 18:01:42.783: INFO: Created: latency-svc-s7fxb Jan 13 18:01:42.800: INFO: Got endpoints: latency-svc-s7fxb [921.430248ms] Jan 13 18:01:42.818: INFO: Created: latency-svc-w4q7p Jan 13 18:01:42.827: INFO: Got endpoints: latency-svc-w4q7p [915.869461ms] Jan 13 18:01:42.842: INFO: Created: latency-svc-gfnzh Jan 13 18:01:42.851: INFO: Got endpoints: latency-svc-gfnzh [910.192051ms] Jan 13 18:01:42.878: INFO: Created: latency-svc-zmx86 Jan 13 18:01:42.937: INFO: Got endpoints: latency-svc-zmx86 [971.976492ms] Jan 13 18:01:42.939: INFO: Created: latency-svc-zlh2l Jan 13 18:01:42.947: INFO: Got endpoints: latency-svc-zlh2l [848.764514ms] Jan 13 18:01:42.967: INFO: Created: latency-svc-j8rjm Jan 13 18:01:42.983: INFO: Got endpoints: latency-svc-j8rjm [874.723716ms] Jan 13 18:01:42.998: INFO: Created: latency-svc-dd272 Jan 13 18:01:43.008: INFO: Got endpoints: latency-svc-dd272 [860.909872ms] Jan 13 18:01:43.028: INFO: Created: latency-svc-cgcdp Jan 13 18:01:43.082: INFO: Got endpoints: latency-svc-cgcdp [871.151414ms] Jan 13 18:01:43.088: 
INFO: Created: latency-svc-58smv Jan 13 18:01:43.105: INFO: Got endpoints: latency-svc-58smv [858.494321ms] Jan 13 18:01:43.124: INFO: Created: latency-svc-nhtwx Jan 13 18:01:43.140: INFO: Got endpoints: latency-svc-nhtwx [795.437223ms] Jan 13 18:01:43.159: INFO: Created: latency-svc-6phc9 Jan 13 18:01:43.225: INFO: Got endpoints: latency-svc-6phc9 [839.522433ms] Jan 13 18:01:43.238: INFO: Created: latency-svc-sgn9z Jan 13 18:01:43.265: INFO: Got endpoints: latency-svc-sgn9z [792.225837ms] Jan 13 18:01:43.287: INFO: Created: latency-svc-8587d Jan 13 18:01:43.300: INFO: Got endpoints: latency-svc-8587d [775.862761ms] Jan 13 18:01:43.316: INFO: Created: latency-svc-jk5rf Jan 13 18:01:43.346: INFO: Got endpoints: latency-svc-jk5rf [720.27472ms] Jan 13 18:01:43.358: INFO: Created: latency-svc-d5pgt Jan 13 18:01:43.373: INFO: Got endpoints: latency-svc-d5pgt [677.372902ms] Jan 13 18:01:43.389: INFO: Created: latency-svc-lszxf Jan 13 18:01:43.402: INFO: Got endpoints: latency-svc-lszxf [601.539932ms] Jan 13 18:01:43.423: INFO: Created: latency-svc-bhbfk Jan 13 18:01:43.438: INFO: Got endpoints: latency-svc-bhbfk [611.449251ms] Jan 13 18:01:43.506: INFO: Created: latency-svc-2sp2w Jan 13 18:01:43.511: INFO: Got endpoints: latency-svc-2sp2w [660.371644ms] Jan 13 18:01:43.550: INFO: Created: latency-svc-8s6tc Jan 13 18:01:43.559: INFO: Got endpoints: latency-svc-8s6tc [622.476704ms] Jan 13 18:01:43.581: INFO: Created: latency-svc-bjwd2 Jan 13 18:01:43.595: INFO: Got endpoints: latency-svc-bjwd2 [647.255832ms] Jan 13 18:01:43.676: INFO: Created: latency-svc-57nnd Jan 13 18:01:43.698: INFO: Got endpoints: latency-svc-57nnd [714.877287ms] Jan 13 18:01:43.699: INFO: Created: latency-svc-gxblb Jan 13 18:01:43.727: INFO: Got endpoints: latency-svc-gxblb [719.168238ms] Jan 13 18:01:43.753: INFO: Created: latency-svc-ncjb6 Jan 13 18:01:43.762: INFO: Got endpoints: latency-svc-ncjb6 [680.087241ms] Jan 13 18:01:43.811: INFO: Created: latency-svc-tccqn Jan 13 18:01:43.817: INFO: Got 
endpoints: latency-svc-tccqn [711.440914ms] Jan 13 18:01:43.856: INFO: Created: latency-svc-wt7sb Jan 13 18:01:43.881: INFO: Got endpoints: latency-svc-wt7sb [741.729616ms] Jan 13 18:01:43.981: INFO: Created: latency-svc-qm88q Jan 13 18:01:44.011: INFO: Got endpoints: latency-svc-qm88q [786.363126ms] Jan 13 18:01:44.012: INFO: Created: latency-svc-pdzsp Jan 13 18:01:44.041: INFO: Got endpoints: latency-svc-pdzsp [776.337617ms] Jan 13 18:01:44.129: INFO: Created: latency-svc-zp2wv Jan 13 18:01:44.156: INFO: Got endpoints: latency-svc-zp2wv [855.039627ms] Jan 13 18:01:44.156: INFO: Created: latency-svc-nw8wz Jan 13 18:01:44.187: INFO: Got endpoints: latency-svc-nw8wz [840.798817ms] Jan 13 18:01:44.275: INFO: Created: latency-svc-542pf Jan 13 18:01:44.293: INFO: Got endpoints: latency-svc-542pf [920.399668ms] Jan 13 18:01:44.294: INFO: Created: latency-svc-k7vnk Jan 13 18:01:44.313: INFO: Got endpoints: latency-svc-k7vnk [910.57263ms] Jan 13 18:01:44.341: INFO: Created: latency-svc-d29ww Jan 13 18:01:44.371: INFO: Got endpoints: latency-svc-d29ww [932.805413ms] Jan 13 18:01:44.416: INFO: Created: latency-svc-lz6nh Jan 13 18:01:44.438: INFO: Got endpoints: latency-svc-lz6nh [926.25421ms] Jan 13 18:01:44.438: INFO: Created: latency-svc-z5g25 Jan 13 18:01:44.451: INFO: Got endpoints: latency-svc-z5g25 [892.0831ms] Jan 13 18:01:44.475: INFO: Created: latency-svc-j22dm Jan 13 18:01:44.488: INFO: Got endpoints: latency-svc-j22dm [892.670863ms] Jan 13 18:01:44.580: INFO: Created: latency-svc-j9ltk Jan 13 18:01:44.635: INFO: Got endpoints: latency-svc-j9ltk [936.867144ms] Jan 13 18:01:44.636: INFO: Created: latency-svc-f5zfq Jan 13 18:01:44.655: INFO: Got endpoints: latency-svc-f5zfq [928.327772ms] Jan 13 18:01:44.715: INFO: Created: latency-svc-mdskb Jan 13 18:01:44.721: INFO: Got endpoints: latency-svc-mdskb [958.426584ms] Jan 13 18:01:44.744: INFO: Created: latency-svc-bzx86 Jan 13 18:01:44.756: INFO: Got endpoints: latency-svc-bzx86 [938.985156ms] Jan 13 18:01:44.786: 
INFO: Created: latency-svc-bl44z Jan 13 18:01:44.798: INFO: Got endpoints: latency-svc-bl44z [916.497045ms] Jan 13 18:01:44.850: INFO: Created: latency-svc-nnwdw Jan 13 18:01:44.869: INFO: Got endpoints: latency-svc-nnwdw [857.440706ms] Jan 13 18:01:44.870: INFO: Created: latency-svc-g2d82 Jan 13 18:01:44.898: INFO: Got endpoints: latency-svc-g2d82 [857.315335ms] Jan 13 18:01:44.929: INFO: Created: latency-svc-dbb84 Jan 13 18:01:44.942: INFO: Got endpoints: latency-svc-dbb84 [786.637901ms] Jan 13 18:01:45.003: INFO: Created: latency-svc-96pfb Jan 13 18:01:45.025: INFO: Got endpoints: latency-svc-96pfb [838.118921ms] Jan 13 18:01:45.026: INFO: Created: latency-svc-cx6mg Jan 13 18:01:45.049: INFO: Got endpoints: latency-svc-cx6mg [755.892992ms] Jan 13 18:01:45.084: INFO: Created: latency-svc-2ffz7 Jan 13 18:01:45.099: INFO: Got endpoints: latency-svc-2ffz7 [785.7583ms] Jan 13 18:01:45.151: INFO: Created: latency-svc-5tl7h Jan 13 18:01:45.170: INFO: Got endpoints: latency-svc-5tl7h [799.151471ms] Jan 13 18:01:45.205: INFO: Created: latency-svc-qspmn Jan 13 18:01:45.218: INFO: Got endpoints: latency-svc-qspmn [780.740448ms] Jan 13 18:01:45.273: INFO: Created: latency-svc-2q4f5 Jan 13 18:01:45.295: INFO: Got endpoints: latency-svc-2q4f5 [843.590229ms] Jan 13 18:01:45.296: INFO: Created: latency-svc-9qksv Jan 13 18:01:45.308: INFO: Got endpoints: latency-svc-9qksv [820.536936ms] Jan 13 18:01:45.343: INFO: Created: latency-svc-p4zsz Jan 13 18:01:45.362: INFO: Got endpoints: latency-svc-p4zsz [726.895736ms] Jan 13 18:01:45.431: INFO: Created: latency-svc-7bmsl Jan 13 18:01:45.457: INFO: Got endpoints: latency-svc-7bmsl [801.226492ms] Jan 13 18:01:45.457: INFO: Created: latency-svc-trdhd Jan 13 18:01:45.475: INFO: Got endpoints: latency-svc-trdhd [754.162409ms] Jan 13 18:01:45.498: INFO: Created: latency-svc-zznf8 Jan 13 18:01:45.517: INFO: Got endpoints: latency-svc-zznf8 [760.894769ms] Jan 13 18:01:45.596: INFO: Created: latency-svc-m7v5s Jan 13 18:01:45.619: INFO: Got 
endpoints: latency-svc-m7v5s [821.450587ms] Jan 13 18:01:45.621: INFO: Created: latency-svc-99rw8 Jan 13 18:01:45.630: INFO: Got endpoints: latency-svc-99rw8 [761.611111ms] Jan 13 18:01:45.649: INFO: Created: latency-svc-hvnfs Jan 13 18:01:45.661: INFO: Got endpoints: latency-svc-hvnfs [762.309394ms] Jan 13 18:01:45.685: INFO: Created: latency-svc-2rtbh Jan 13 18:01:45.723: INFO: Got endpoints: latency-svc-2rtbh [780.681768ms] Jan 13 18:01:45.744: INFO: Created: latency-svc-hf7sb Jan 13 18:01:45.762: INFO: Got endpoints: latency-svc-hf7sb [737.352554ms] Jan 13 18:01:45.793: INFO: Created: latency-svc-w5rdm Jan 13 18:01:45.806: INFO: Got endpoints: latency-svc-w5rdm [756.875993ms] Jan 13 18:01:45.823: INFO: Created: latency-svc-jj7ts Jan 13 18:01:45.883: INFO: Got endpoints: latency-svc-jj7ts [784.382046ms] Jan 13 18:01:45.895: INFO: Created: latency-svc-6t5qg Jan 13 18:01:45.913: INFO: Got endpoints: latency-svc-6t5qg [742.882502ms] Jan 13 18:01:45.937: INFO: Created: latency-svc-cp46j Jan 13 18:01:45.955: INFO: Got endpoints: latency-svc-cp46j [736.693812ms] Jan 13 18:01:46.035: INFO: Created: latency-svc-mq9gw Jan 13 18:01:46.069: INFO: Got endpoints: latency-svc-mq9gw [773.631547ms] Jan 13 18:01:46.070: INFO: Created: latency-svc-n22vq Jan 13 18:01:46.081: INFO: Got endpoints: latency-svc-n22vq [773.076274ms] Jan 13 18:01:46.099: INFO: Created: latency-svc-2nlqr Jan 13 18:01:46.117: INFO: Got endpoints: latency-svc-2nlqr [754.563509ms] Jan 13 18:01:46.177: INFO: Created: latency-svc-zzdkz Jan 13 18:01:46.206: INFO: Got endpoints: latency-svc-zzdkz [748.971659ms] Jan 13 18:01:46.206: INFO: Created: latency-svc-dfh9c Jan 13 18:01:46.224: INFO: Got endpoints: latency-svc-dfh9c [748.622465ms] Jan 13 18:01:46.243: INFO: Created: latency-svc-v8pk6 Jan 13 18:01:46.260: INFO: Got endpoints: latency-svc-v8pk6 [743.055574ms] Jan 13 18:01:46.371: INFO: Created: latency-svc-jkwc5 Jan 13 18:01:46.393: INFO: Got endpoints: latency-svc-jkwc5 [773.607541ms] Jan 13 18:01:46.394: 
INFO: Created: latency-svc-c6pst Jan 13 18:01:46.404: INFO: Got endpoints: latency-svc-c6pst [773.693569ms] Jan 13 18:01:46.417: INFO: Created: latency-svc-r5dld Jan 13 18:01:46.427: INFO: Got endpoints: latency-svc-r5dld [766.509286ms] Jan 13 18:01:46.445: INFO: Created: latency-svc-9696c Jan 13 18:01:46.566: INFO: Created: latency-svc-7jxw9 Jan 13 18:01:46.699: INFO: Got endpoints: latency-svc-9696c [976.418757ms] Jan 13 18:01:46.701: INFO: Created: latency-svc-vqtv5 Jan 13 18:01:46.716: INFO: Got endpoints: latency-svc-vqtv5 [909.654928ms] Jan 13 18:01:46.765: INFO: Created: latency-svc-r7x78 Jan 13 18:01:46.765: INFO: Got endpoints: latency-svc-7jxw9 [1.002514709s] Jan 13 18:01:46.853: INFO: Got endpoints: latency-svc-r7x78 [970.077996ms] Jan 13 18:01:46.855: INFO: Created: latency-svc-pgf7w Jan 13 18:01:46.866: INFO: Got endpoints: latency-svc-pgf7w [952.297384ms] Jan 13 18:01:46.884: INFO: Created: latency-svc-r22r4 Jan 13 18:01:46.902: INFO: Got endpoints: latency-svc-r22r4 [946.7199ms] Jan 13 18:01:46.921: INFO: Created: latency-svc-cjg9z Jan 13 18:01:46.938: INFO: Got endpoints: latency-svc-cjg9z [869.194338ms] Jan 13 18:01:46.994: INFO: Created: latency-svc-cps2v Jan 13 18:01:47.015: INFO: Got endpoints: latency-svc-cps2v [933.579842ms] Jan 13 18:01:47.016: INFO: Created: latency-svc-44xf2 Jan 13 18:01:47.033: INFO: Got endpoints: latency-svc-44xf2 [915.844601ms] Jan 13 18:01:47.052: INFO: Created: latency-svc-f5wtv Jan 13 18:01:47.070: INFO: Got endpoints: latency-svc-f5wtv [863.755763ms] Jan 13 18:01:47.135: INFO: Created: latency-svc-ssgsj Jan 13 18:01:47.160: INFO: Got endpoints: latency-svc-ssgsj [936.011264ms] Jan 13 18:01:47.161: INFO: Created: latency-svc-hlw74 Jan 13 18:01:47.177: INFO: Got endpoints: latency-svc-hlw74 [916.648175ms] Jan 13 18:01:47.196: INFO: Created: latency-svc-hfwb2 Jan 13 18:01:47.206: INFO: Got endpoints: latency-svc-hfwb2 [813.083279ms] Jan 13 18:01:47.233: INFO: Created: latency-svc-nmss8 Jan 13 18:01:47.274: INFO: Got 
endpoints: latency-svc-nmss8 [870.064026ms] Jan 13 18:01:47.291: INFO: Created: latency-svc-vcdw9 Jan 13 18:01:47.309: INFO: Got endpoints: latency-svc-vcdw9 [881.647826ms] Jan 13 18:01:47.327: INFO: Created: latency-svc-zlp5j Jan 13 18:01:47.339: INFO: Got endpoints: latency-svc-zlp5j [639.239271ms] Jan 13 18:01:47.339: INFO: Latencies: [103.760432ms 175.989877ms 208.385009ms 244.527322ms 317.973651ms 373.074026ms 403.86435ms 495.912015ms 525.922346ms 601.539932ms 611.449251ms 622.476704ms 639.239271ms 645.049346ms 647.255832ms 660.371644ms 668.939601ms 677.372902ms 680.087241ms 702.455274ms 703.321791ms 709.612299ms 711.440914ms 714.877287ms 716.097077ms 717.665029ms 718.898579ms 719.168238ms 720.252237ms 720.27472ms 724.400813ms 725.111756ms 726.895736ms 730.848794ms 735.146686ms 736.693812ms 736.693895ms 737.352554ms 737.656619ms 740.299512ms 741.729616ms 742.348895ms 742.520225ms 742.882502ms 743.055574ms 743.774076ms 747.438834ms 747.566528ms 748.622465ms 748.824863ms 748.971659ms 748.992286ms 754.162409ms 754.563509ms 754.858008ms 754.984535ms 755.892992ms 755.908852ms 756.875993ms 760.663098ms 760.894769ms 761.114321ms 761.2186ms 761.611111ms 762.309394ms 766.509286ms 768.71802ms 772.724048ms 773.076274ms 773.607541ms 773.631547ms 773.693569ms 775.862761ms 776.337617ms 777.162683ms 777.944076ms 778.14092ms 780.073675ms 780.681768ms 780.740448ms 781.374991ms 784.382046ms 785.7583ms 786.363126ms 786.637901ms 787.336478ms 790.676824ms 792.225837ms 794.023732ms 795.437223ms 797.481052ms 798.951767ms 799.151471ms 800.203795ms 800.420795ms 801.226492ms 804.953622ms 806.479737ms 809.136043ms 813.083279ms 813.563393ms 815.252916ms 817.768575ms 818.40287ms 820.536936ms 820.84084ms 821.450587ms 822.227027ms 825.032537ms 827.553953ms 827.728396ms 832.091524ms 833.728124ms 835.181192ms 838.118921ms 839.522433ms 840.798817ms 843.050096ms 843.056242ms 843.590229ms 844.795154ms 845.468355ms 848.764514ms 850.35868ms 851.802633ms 851.92691ms 854.895195ms 855.039627ms 
855.896841ms 856.364119ms 856.38198ms 856.860238ms 857.315335ms 857.440706ms 858.494321ms 860.909872ms 861.560707ms 862.715795ms 862.813659ms 863.018682ms 863.755763ms 864.175303ms 864.416358ms 868.263054ms 869.040594ms 869.194338ms 870.064026ms 871.151414ms 874.314332ms 874.723716ms 875.215953ms 876.343807ms 876.922314ms 881.647826ms 882.916688ms 886.083516ms 886.51911ms 887.774047ms 892.0831ms 892.095868ms 892.627276ms 892.670863ms 892.733838ms 898.576417ms 901.864671ms 906.935493ms 908.989677ms 909.654928ms 910.192051ms 910.57263ms 911.78642ms 915.844601ms 915.869461ms 916.497045ms 916.648175ms 916.764767ms 918.070247ms 920.399668ms 921.430248ms 922.210635ms 926.25421ms 928.327772ms 932.805413ms 933.579842ms 933.848227ms 935.487657ms 936.011264ms 936.867144ms 938.985156ms 946.7199ms 951.357153ms 952.102303ms 952.297384ms 952.469033ms 958.426584ms 970.077996ms 971.976492ms 976.418757ms 998.302006ms 1.002514709s] Jan 13 18:01:47.339: INFO: 50 %ile: 813.563393ms Jan 13 18:01:47.339: INFO: 90 %ile: 926.25421ms Jan 13 18:01:47.339: INFO: 99 %ile: 998.302006ms Jan 13 18:01:47.339: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 13 18:01:47.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-9n8g7" for this suite. 
Jan 13 18:02:21.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:02:21.426: INFO: namespace: e2e-tests-svc-latency-9n8g7, resource: bindings, ignored listing per whitelist
Jan 13 18:02:21.452: INFO: namespace e2e-tests-svc-latency-9n8g7 deletion completed in 34.106160663s
• [SLOW TEST:49.386 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:02:21.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 18:02:21.746: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7ba53740-55c9-11eb-9c75-0242ac12000b", Controller:(*bool)(0xc001ac4856), BlockOwnerDeletion:(*bool)(0xc001ac4857)}}
Jan 13 18:02:21.763: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7ba0f427-55c9-11eb-9c75-0242ac12000b", Controller:(*bool)(0xc00180fe42), BlockOwnerDeletion:(*bool)(0xc00180fe43)}}
Jan 13 18:02:21.787: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"7ba1939b-55c9-11eb-9c75-0242ac12000b", Controller:(*bool)(0xc001c09c42), BlockOwnerDeletion:(*bool)(0xc001c09c43)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:02:26.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-nhjhn" for this suite.
Jan 13 18:02:32.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:02:32.873: INFO: namespace: e2e-tests-gc-nhjhn, resource: bindings, ignored listing per whitelist
Jan 13 18:02:32.951: INFO: namespace e2e-tests-gc-nhjhn deletion completed in 6.108554269s
• [SLOW TEST:11.499 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:02:32.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 13 18:02:33.135: INFO: Number of nodes with available pods: 0
Jan 13 18:02:33.135: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:34.143: INFO: Number of nodes with available pods: 0
Jan 13 18:02:34.143: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:35.144: INFO: Number of nodes with available pods: 0
Jan 13 18:02:35.144: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:36.143: INFO: Number of nodes with available pods: 1
Jan 13 18:02:36.143: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 13 18:02:36.213: INFO: Number of nodes with available pods: 0
Jan 13 18:02:36.213: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:37.222: INFO: Number of nodes with available pods: 0
Jan 13 18:02:37.222: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:38.242: INFO: Number of nodes with available pods: 0
Jan 13 18:02:38.242: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:39.222: INFO: Number of nodes with available pods: 0
Jan 13 18:02:39.222: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:40.222: INFO: Number of nodes with available pods: 0
Jan 13 18:02:40.222: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:41.222: INFO: Number of nodes with available pods: 0
Jan 13 18:02:41.222: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:42.220: INFO: Number of nodes with available pods: 0
Jan 13 18:02:42.220: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:43.222: INFO: Number of nodes with available pods: 0
Jan 13 18:02:43.222: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:44.221: INFO: Number of nodes with available pods: 0
Jan 13 18:02:44.221: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:45.220: INFO: Number of nodes with available pods: 0
Jan 13 18:02:45.220: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:46.220: INFO: Number of nodes with available pods: 0
Jan 13 18:02:46.220: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:47.221: INFO: Number of nodes with available pods: 0
Jan 13 18:02:47.221: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:48.221: INFO: Number of nodes with available pods: 0
Jan 13 18:02:48.221: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:49.222: INFO: Number of nodes with available pods: 0
Jan 13 18:02:49.222: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:50.221: INFO: Number of nodes with available pods: 0
Jan 13 18:02:50.221: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:51.220: INFO: Number of nodes with available pods: 0
Jan 13 18:02:51.220: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:02:52.221: INFO: Number of nodes with available pods: 1
Jan 13 18:02:52.221: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-j68vc, will wait for the garbage collector to delete the pods
Jan 13 18:02:52.284: INFO: Deleting DaemonSet.extensions daemon-set took: 5.968574ms
Jan 13 18:02:52.384: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.341969ms
Jan 13 18:02:59.088: INFO: Number of nodes with available pods: 0
Jan 13 18:02:59.088: INFO: Number of running nodes: 0, number of available pods: 0
Jan 13 18:02:59.090: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-j68vc/daemonsets","resourceVersion":"491689"},"items":null}
Jan 13 18:02:59.093: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-j68vc/pods","resourceVersion":"491689"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:02:59.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-j68vc" for this suite.
Jan 13 18:03:05.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:03:05.249: INFO: namespace: e2e-tests-daemonsets-j68vc, resource: bindings, ignored listing per whitelist
Jan 13 18:03:05.274: INFO: namespace e2e-tests-daemonsets-j68vc deletion completed in 6.172338916s
• [SLOW TEST:32.321 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:03:05.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 18:03:05.368: INFO: (0) /api/v1/nodes/hunter-control-plane:10250/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan 13 18:03:11.731: INFO: Waiting up to 5m0s for pod "client-containers-997e6f35-55c9-11eb-8355-0242ac110009" in namespace "e2e-tests-containers-gt6n8" to be "success or failure"
Jan 13 18:03:11.734: INFO: Pod "client-containers-997e6f35-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.406074ms
Jan 13 18:03:13.739: INFO: Pod "client-containers-997e6f35-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007883147s
Jan 13 18:03:15.743: INFO: Pod "client-containers-997e6f35-55c9-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012066822s
STEP: Saw pod success
Jan 13 18:03:15.743: INFO: Pod "client-containers-997e6f35-55c9-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:03:15.746: INFO: Trying to get logs from node hunter-control-plane pod client-containers-997e6f35-55c9-11eb-8355-0242ac110009 container test-container: 
STEP: delete the pod
Jan 13 18:03:15.767: INFO: Waiting for pod client-containers-997e6f35-55c9-11eb-8355-0242ac110009 to disappear
Jan 13 18:03:15.786: INFO: Pod client-containers-997e6f35-55c9-11eb-8355-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:03:15.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-gt6n8" for this suite.
Jan 13 18:03:21.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:03:21.926: INFO: namespace: e2e-tests-containers-gt6n8, resource: bindings, ignored listing per whitelist
Jan 13 18:03:21.931: INFO: namespace e2e-tests-containers-gt6n8 deletion completed in 6.118933936s

• [SLOW TEST:10.349 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:03:21.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 13 18:03:22.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-p97tq'
Jan 13 18:03:22.373: INFO: stderr: ""
Jan 13 18:03:22.373: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 13 18:03:22.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-p97tq'
Jan 13 18:03:22.510: INFO: stderr: ""
Jan 13 18:03:22.510: INFO: stdout: "update-demo-nautilus-cbxjt update-demo-nautilus-jjf96 "
Jan 13 18:03:22.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cbxjt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p97tq'
Jan 13 18:03:22.606: INFO: stderr: ""
Jan 13 18:03:22.606: INFO: stdout: ""
Jan 13 18:03:22.606: INFO: update-demo-nautilus-cbxjt is created but not running
Jan 13 18:03:27.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-p97tq'
Jan 13 18:03:27.714: INFO: stderr: ""
Jan 13 18:03:27.714: INFO: stdout: "update-demo-nautilus-cbxjt update-demo-nautilus-jjf96 "
Jan 13 18:03:27.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cbxjt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p97tq'
Jan 13 18:03:27.808: INFO: stderr: ""
Jan 13 18:03:27.809: INFO: stdout: "true"
Jan 13 18:03:27.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cbxjt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p97tq'
Jan 13 18:03:27.907: INFO: stderr: ""
Jan 13 18:03:27.907: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 13 18:03:27.907: INFO: validating pod update-demo-nautilus-cbxjt
Jan 13 18:03:27.911: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 13 18:03:27.911: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 13 18:03:27.911: INFO: update-demo-nautilus-cbxjt is verified up and running
Jan 13 18:03:27.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjf96 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p97tq'
Jan 13 18:03:28.021: INFO: stderr: ""
Jan 13 18:03:28.021: INFO: stdout: "true"
Jan 13 18:03:28.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjf96 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p97tq'
Jan 13 18:03:28.124: INFO: stderr: ""
Jan 13 18:03:28.124: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 13 18:03:28.124: INFO: validating pod update-demo-nautilus-jjf96
Jan 13 18:03:28.129: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 13 18:03:28.129: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 13 18:03:28.129: INFO: update-demo-nautilus-jjf96 is verified up and running
STEP: using delete to clean up resources
Jan 13 18:03:28.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-p97tq'
Jan 13 18:03:28.239: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 18:03:28.239: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 13 18:03:28.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-p97tq'
Jan 13 18:03:28.339: INFO: stderr: "No resources found.\n"
Jan 13 18:03:28.339: INFO: stdout: ""
Jan 13 18:03:28.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-p97tq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 13 18:03:28.429: INFO: stderr: ""
Jan 13 18:03:28.429: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:03:28.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p97tq" for this suite.
Jan 13 18:03:50.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:03:50.570: INFO: namespace: e2e-tests-kubectl-p97tq, resource: bindings, ignored listing per whitelist
Jan 13 18:03:50.594: INFO: namespace e2e-tests-kubectl-p97tq deletion completed in 22.160997905s

• [SLOW TEST:28.662 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
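The Update Demo test above polls kubectl every few seconds until the go-template prints "true" for the pod's running state. A minimal sketch of that poll loop as a generic shell helper (the `retry_until` name and the file-based demo predicate are illustrative, not part of the e2e framework; the real predicate would be the `kubectl get pods -o template` command shown in the log):

```shell
#!/bin/sh
# retry_until TIMEOUT INTERVAL CMD...: re-run CMD until it succeeds,
# giving up once roughly TIMEOUT seconds have elapsed. This mirrors the
# poll-with-deadline pattern the e2e test uses around kubectl.
retry_until() {
    timeout=$1 interval=$2
    shift 2
    elapsed=0
    until "$@"; do
        [ "$elapsed" -ge "$timeout" ] && return 1
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
}

# Demo predicate: a file that exists, so the first probe succeeds.
rm -f /tmp/retry-demo-flag
touch /tmp/retry-demo-flag
retry_until 3 1 test -e /tmp/retry-demo-flag && echo ready   # prints: ready
```

In the real test the predicate fails on the first attempt (pod still Pending at 18:03:22), so one more 5-second round trip is made before "true" comes back at 18:03:27.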
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:03:50.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan 13 18:03:50.718: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 13 18:03:50.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8b2wf'
Jan 13 18:03:51.029: INFO: stderr: ""
Jan 13 18:03:51.029: INFO: stdout: "service/redis-slave created\n"
Jan 13 18:03:51.029: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 13 18:03:51.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8b2wf'
Jan 13 18:03:51.395: INFO: stderr: ""
Jan 13 18:03:51.395: INFO: stdout: "service/redis-master created\n"
Jan 13 18:03:51.396: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 13 18:03:51.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8b2wf'
Jan 13 18:03:51.685: INFO: stderr: ""
Jan 13 18:03:51.685: INFO: stdout: "service/frontend created\n"
Jan 13 18:03:51.685: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 13 18:03:51.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8b2wf'
Jan 13 18:03:51.945: INFO: stderr: ""
Jan 13 18:03:51.945: INFO: stdout: "deployment.extensions/frontend created\n"
Jan 13 18:03:51.945: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 13 18:03:51.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8b2wf'
Jan 13 18:03:52.253: INFO: stderr: ""
Jan 13 18:03:52.253: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan 13 18:03:52.254: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 13 18:03:52.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8b2wf'
Jan 13 18:03:52.555: INFO: stderr: ""
Jan 13 18:03:52.555: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan 13 18:03:52.555: INFO: Waiting for all frontend pods to be Running.
Jan 13 18:04:02.606: INFO: Waiting for frontend to serve content.
Jan 13 18:04:02.627: INFO: Trying to add a new entry to the guestbook.
Jan 13 18:04:02.639: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 13 18:04:02.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8b2wf'
Jan 13 18:04:06.123: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 18:04:06.123: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 13 18:04:06.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8b2wf'
Jan 13 18:04:06.324: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 18:04:06.324: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 13 18:04:06.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8b2wf'
Jan 13 18:04:06.540: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 18:04:06.540: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 13 18:04:06.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8b2wf'
Jan 13 18:04:06.645: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 18:04:06.645: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 13 18:04:06.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8b2wf'
Jan 13 18:04:06.796: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 18:04:06.797: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 13 18:04:06.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8b2wf'
Jan 13 18:04:08.605: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 18:04:08.605: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:04:08.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8b2wf" for this suite.
Jan 13 18:04:48.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:04:48.785: INFO: namespace: e2e-tests-kubectl-8b2wf, resource: bindings, ignored listing per whitelist
Jan 13 18:04:48.855: INFO: namespace e2e-tests-kubectl-8b2wf deletion completed in 40.174534847s

• [SLOW TEST:58.261 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
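The guestbook Deployment manifests above target extensions/v1beta1, which this v1.13.12 cluster still serves but which was removed in Kubernetes 1.16. On current clusters the same redis-master Deployment would need the apps/v1 form, where `spec.selector` is mandatory — a hedged sketch, not part of the test data:

```yaml
apiVersion: apps/v1          # extensions/v1beta1 was removed in 1.16
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:                  # required (and immutable) in apps/v1
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
```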
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:04:48.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 13 18:04:48.968: INFO: Waiting up to 5m0s for pod "pod-d3778693-55c9-11eb-8355-0242ac110009" in namespace "e2e-tests-emptydir-xrsbd" to be "success or failure"
Jan 13 18:04:48.972: INFO: Pod "pod-d3778693-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.949146ms
Jan 13 18:04:50.976: INFO: Pod "pod-d3778693-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007903659s
Jan 13 18:04:53.077: INFO: Pod "pod-d3778693-55c9-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109008055s
STEP: Saw pod success
Jan 13 18:04:53.077: INFO: Pod "pod-d3778693-55c9-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:04:53.080: INFO: Trying to get logs from node hunter-control-plane pod pod-d3778693-55c9-11eb-8355-0242ac110009 container test-container: 
STEP: delete the pod
Jan 13 18:04:53.264: INFO: Waiting for pod pod-d3778693-55c9-11eb-8355-0242ac110009 to disappear
Jan 13 18:04:53.275: INFO: Pod pod-d3778693-55c9-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:04:53.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xrsbd" for this suite.
Jan 13 18:04:59.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:04:59.366: INFO: namespace: e2e-tests-emptydir-xrsbd, resource: bindings, ignored listing per whitelist
Jan 13 18:04:59.371: INFO: namespace e2e-tests-emptydir-xrsbd deletion completed in 6.093477305s

• [SLOW TEST:10.516 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
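The (root,0644,tmpfs) case above runs a pod whose container writes a mode-0644 file into a memory-backed emptyDir and then exits, which is why the pod goes Pending → Succeeded. A minimal sketch of such a pod spec (the pod name, busybox image, and command are stand-ins for the framework's mounttest image, which the log does not show):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # stand-in for the e2e mounttest image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir
```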
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:04:59.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 18:04:59.642: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:05:00.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-5pvls" for this suite.
Jan 13 18:05:06.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:05:06.821: INFO: namespace: e2e-tests-custom-resource-definition-5pvls, resource: bindings, ignored listing per whitelist
Jan 13 18:05:06.837: INFO: namespace e2e-tests-custom-resource-definition-5pvls deletion completed in 6.135316963s

• [SLOW TEST:7.466 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
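The CRD test above only creates and deletes a definition object, so no manifest appears in the log. For reference, the shape of a minimal CustomResourceDefinition on this v1.13 cluster (apiextensions.k8s.io/v1beta1 was the served version; the group and names here are purely illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # served by v1.13; v1 arrived later
kind: CustomResourceDefinition
metadata:
  name: widgets.mygroup.example.com        # must be <plural>.<group>
spec:
  group: mygroup.example.com
  version: v1beta1
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
```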
SSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:05:06.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-6xcqm
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-6xcqm to expose endpoints map[]
Jan 13 18:05:07.018: INFO: Get endpoints failed (23.506558ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 13 18:05:08.022: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-6xcqm exposes endpoints map[] (1.027652678s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-6xcqm
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-6xcqm to expose endpoints map[pod1:[80]]
Jan 13 18:05:15.488: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (7.458803255s elapsed, will retry)
Jan 13 18:05:16.493: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-6xcqm exposes endpoints map[pod1:[80]] (8.464055701s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-6xcqm
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-6xcqm to expose endpoints map[pod1:[80] pod2:[80]]
Jan 13 18:05:20.671: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-6xcqm exposes endpoints map[pod1:[80] pod2:[80]] (4.141362797s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-6xcqm
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-6xcqm to expose endpoints map[pod2:[80]]
Jan 13 18:05:21.760: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-6xcqm exposes endpoints map[pod2:[80]] (1.085307758s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-6xcqm
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-6xcqm to expose endpoints map[]
Jan 13 18:05:25.382: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-6xcqm exposes endpoints map[] (3.617763654s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:05:25.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-6xcqm" for this suite.
Jan 13 18:05:31.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:05:31.811: INFO: namespace: e2e-tests-services-6xcqm, resource: bindings, ignored listing per whitelist
Jan 13 18:05:31.823: INFO: namespace e2e-tests-services-6xcqm deletion completed in 6.147301847s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:24.985 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
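The endpoint test above drives the service's Endpoints object purely by creating and deleting labelled pods behind it, which is why the expected map goes map[] → map[pod1:[80]] → map[pod1:[80] pod2:[80]] and back. The service itself would look roughly like this (the selector label key is an assumption; the log does not print the manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2   # assumed label, carried by pod1 and pod2
  ports:
  - port: 80
    protocol: TCP
```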
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:05:31.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 13 18:05:31.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-lzfvj'
Jan 13 18:05:32.090: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 13 18:05:32.090: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 13 18:05:32.132: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-ft6cl]
Jan 13 18:05:32.132: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-ft6cl" in namespace "e2e-tests-kubectl-lzfvj" to be "running and ready"
Jan 13 18:05:32.158: INFO: Pod "e2e-test-nginx-rc-ft6cl": Phase="Pending", Reason="", readiness=false. Elapsed: 25.523004ms
Jan 13 18:05:34.161: INFO: Pod "e2e-test-nginx-rc-ft6cl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029288401s
Jan 13 18:05:36.165: INFO: Pod "e2e-test-nginx-rc-ft6cl": Phase="Running", Reason="", readiness=true. Elapsed: 4.032649418s
Jan 13 18:05:36.165: INFO: Pod "e2e-test-nginx-rc-ft6cl" satisfied condition "running and ready"
Jan 13 18:05:36.165: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-ft6cl]
Jan 13 18:05:36.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-lzfvj'
Jan 13 18:05:36.282: INFO: stderr: ""
Jan 13 18:05:36.282: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan 13 18:05:36.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-lzfvj'
Jan 13 18:05:36.394: INFO: stderr: ""
Jan 13 18:05:36.394: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:05:36.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lzfvj" for this suite.
Jan 13 18:05:42.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:05:42.499: INFO: namespace: e2e-tests-kubectl-lzfvj, resource: bindings, ignored listing per whitelist
Jan 13 18:05:42.523: INFO: namespace e2e-tests-kubectl-lzfvj deletion completed in 6.108054067s

• [SLOW TEST:10.700 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
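For context, the now-removed `--generator=run/v1` flag used in this test made `kubectl run` create a ReplicationController rather than a Deployment or bare Pod. A minimal sketch of the object it would have produced is below; the label scheme (`run: <name>`) follows the generator's convention, and the manifest is illustrative, not captured from this run.

```yaml
# Roughly what `kubectl run e2e-test-nginx-rc --image=... --generator=run/v1`
# created in this test (illustrative reconstruction).
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

The test then exercised `kubectl logs rc/e2e-test-nginx-rc`, which resolves the controller's selector to a pod before fetching logs, and finished with `kubectl delete rc`.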
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:05:42.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 13 18:05:42.608: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3716a96-55c9-11eb-8355-0242ac110009" in namespace "e2e-tests-downward-api-wfs92" to be "success or failure"
Jan 13 18:05:42.674: INFO: Pod "downwardapi-volume-f3716a96-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 66.195243ms
Jan 13 18:05:44.678: INFO: Pod "downwardapi-volume-f3716a96-55c9-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070571717s
Jan 13 18:05:46.682: INFO: Pod "downwardapi-volume-f3716a96-55c9-11eb-8355-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.074137096s
Jan 13 18:05:48.686: INFO: Pod "downwardapi-volume-f3716a96-55c9-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077736307s
STEP: Saw pod success
Jan 13 18:05:48.686: INFO: Pod "downwardapi-volume-f3716a96-55c9-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:05:48.688: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-f3716a96-55c9-11eb-8355-0242ac110009 container client-container: 
STEP: delete the pod
Jan 13 18:05:48.750: INFO: Waiting for pod downwardapi-volume-f3716a96-55c9-11eb-8355-0242ac110009 to disappear
Jan 13 18:05:48.817: INFO: Pod downwardapi-volume-f3716a96-55c9-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:05:48.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wfs92" for this suite.
Jan 13 18:05:54.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:05:54.876: INFO: namespace: e2e-tests-downward-api-wfs92, resource: bindings, ignored listing per whitelist
Jan 13 18:05:54.923: INFO: namespace e2e-tests-downward-api-wfs92 deletion completed in 6.101968355s

• [SLOW TEST:12.401 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
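The test above verifies that a container's CPU request can be exposed through a downward API volume. A minimal sketch of such a pod follows; the image and paths are stand-ins (the suite uses its own test image), but the `resourceFieldRef` fields are the standard downward API volume syntax.

```yaml
# Illustrative pod exposing its own CPU request via a downward API volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # stand-in for the e2e test image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                # value is reported in millicores
```

The "success or failure" condition in the log corresponds to the pod reaching `Succeeded` after printing the projected value.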
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:05:54.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 13 18:06:05.114: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-b9lwz PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 13 18:06:05.114: INFO: >>> kubeConfig: /root/.kube/config
I0113 18:06:05.155523       6 log.go:172] (0xc0023f22c0) (0xc0011712c0) Create stream
I0113 18:06:05.155554       6 log.go:172] (0xc0023f22c0) (0xc0011712c0) Stream added, broadcasting: 1
I0113 18:06:05.158974       6 log.go:172] (0xc0023f22c0) Reply frame received for 1
I0113 18:06:05.159035       6 log.go:172] (0xc0023f22c0) (0xc000983cc0) Create stream
I0113 18:06:05.159052       6 log.go:172] (0xc0023f22c0) (0xc000983cc0) Stream added, broadcasting: 3
I0113 18:06:05.160028       6 log.go:172] (0xc0023f22c0) Reply frame received for 3
I0113 18:06:05.160108       6 log.go:172] (0xc0023f22c0) (0xc0012e1220) Create stream
I0113 18:06:05.160141       6 log.go:172] (0xc0023f22c0) (0xc0012e1220) Stream added, broadcasting: 5
I0113 18:06:05.165441       6 log.go:172] (0xc0023f22c0) Reply frame received for 5
I0113 18:06:05.253105       6 log.go:172] (0xc0023f22c0) Data frame received for 3
I0113 18:06:05.253141       6 log.go:172] (0xc000983cc0) (3) Data frame handling
I0113 18:06:05.253163       6 log.go:172] (0xc000983cc0) (3) Data frame sent
I0113 18:06:05.253239       6 log.go:172] (0xc0023f22c0) Data frame received for 3
I0113 18:06:05.253262       6 log.go:172] (0xc000983cc0) (3) Data frame handling
I0113 18:06:05.253288       6 log.go:172] (0xc0023f22c0) Data frame received for 5
I0113 18:06:05.253301       6 log.go:172] (0xc0012e1220) (5) Data frame handling
I0113 18:06:05.254888       6 log.go:172] (0xc0023f22c0) Data frame received for 1
I0113 18:06:05.254925       6 log.go:172] (0xc0011712c0) (1) Data frame handling
I0113 18:06:05.254954       6 log.go:172] (0xc0011712c0) (1) Data frame sent
I0113 18:06:05.254980       6 log.go:172] (0xc0023f22c0) (0xc0011712c0) Stream removed, broadcasting: 1
I0113 18:06:05.255019       6 log.go:172] (0xc0023f22c0) Go away received
I0113 18:06:05.255188       6 log.go:172] (0xc0023f22c0) (0xc0011712c0) Stream removed, broadcasting: 1
I0113 18:06:05.255242       6 log.go:172] (0xc0023f22c0) (0xc000983cc0) Stream removed, broadcasting: 3
I0113 18:06:05.255263       6 log.go:172] (0xc0023f22c0) (0xc0012e1220) Stream removed, broadcasting: 5
Jan 13 18:06:05.255: INFO: Exec stderr: ""
Jan 13 18:06:05.255: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-b9lwz PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 13 18:06:05.255: INFO: >>> kubeConfig: /root/.kube/config
I0113 18:06:05.287950       6 log.go:172] (0xc0023f2790) (0xc0011715e0) Create stream
I0113 18:06:05.287974       6 log.go:172] (0xc0023f2790) (0xc0011715e0) Stream added, broadcasting: 1
I0113 18:06:05.293403       6 log.go:172] (0xc0023f2790) Reply frame received for 1
I0113 18:06:05.293454       6 log.go:172] (0xc0023f2790) (0xc000983e00) Create stream
I0113 18:06:05.293466       6 log.go:172] (0xc0023f2790) (0xc000983e00) Stream added, broadcasting: 3
I0113 18:06:05.294814       6 log.go:172] (0xc0023f2790) Reply frame received for 3
I0113 18:06:05.294876       6 log.go:172] (0xc0023f2790) (0xc00183c000) Create stream
I0113 18:06:05.294893       6 log.go:172] (0xc0023f2790) (0xc00183c000) Stream added, broadcasting: 5
I0113 18:06:05.295608       6 log.go:172] (0xc0023f2790) Reply frame received for 5
I0113 18:06:05.353822       6 log.go:172] (0xc0023f2790) Data frame received for 5
I0113 18:06:05.353883       6 log.go:172] (0xc00183c000) (5) Data frame handling
I0113 18:06:05.353926       6 log.go:172] (0xc0023f2790) Data frame received for 3
I0113 18:06:05.353942       6 log.go:172] (0xc000983e00) (3) Data frame handling
I0113 18:06:05.353961       6 log.go:172] (0xc000983e00) (3) Data frame sent
I0113 18:06:05.353981       6 log.go:172] (0xc0023f2790) Data frame received for 3
I0113 18:06:05.353997       6 log.go:172] (0xc000983e00) (3) Data frame handling
I0113 18:06:05.355048       6 log.go:172] (0xc0023f2790) Data frame received for 1
I0113 18:06:05.355066       6 log.go:172] (0xc0011715e0) (1) Data frame handling
I0113 18:06:05.355076       6 log.go:172] (0xc0011715e0) (1) Data frame sent
I0113 18:06:05.355095       6 log.go:172] (0xc0023f2790) (0xc0011715e0) Stream removed, broadcasting: 1
I0113 18:06:05.355190       6 log.go:172] (0xc0023f2790) (0xc0011715e0) Stream removed, broadcasting: 1
I0113 18:06:05.355206       6 log.go:172] (0xc0023f2790) (0xc000983e00) Stream removed, broadcasting: 3
I0113 18:06:05.355374       6 log.go:172] (0xc0023f2790) Go away received
I0113 18:06:05.355407       6 log.go:172] (0xc0023f2790) (0xc00183c000) Stream removed, broadcasting: 5
Jan 13 18:06:05.355: INFO: Exec stderr: ""
Jan 13 18:06:05.355: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-b9lwz PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 13 18:06:05.355: INFO: >>> kubeConfig: /root/.kube/config
I0113 18:06:05.383353       6 log.go:172] (0xc000365130) (0xc00183c280) Create stream
I0113 18:06:05.383423       6 log.go:172] (0xc000365130) (0xc00183c280) Stream added, broadcasting: 1
I0113 18:06:05.385227       6 log.go:172] (0xc000365130) Reply frame received for 1
I0113 18:06:05.385263       6 log.go:172] (0xc000365130) (0xc0019fe000) Create stream
I0113 18:06:05.385274       6 log.go:172] (0xc000365130) (0xc0019fe000) Stream added, broadcasting: 3
I0113 18:06:05.386147       6 log.go:172] (0xc000365130) Reply frame received for 3
I0113 18:06:05.386183       6 log.go:172] (0xc000365130) (0xc0025da0a0) Create stream
I0113 18:06:05.386199       6 log.go:172] (0xc000365130) (0xc0025da0a0) Stream added, broadcasting: 5
I0113 18:06:05.387029       6 log.go:172] (0xc000365130) Reply frame received for 5
I0113 18:06:05.453851       6 log.go:172] (0xc000365130) Data frame received for 3
I0113 18:06:05.453896       6 log.go:172] (0xc0019fe000) (3) Data frame handling
I0113 18:06:05.453915       6 log.go:172] (0xc0019fe000) (3) Data frame sent
I0113 18:06:05.453936       6 log.go:172] (0xc000365130) Data frame received for 3
I0113 18:06:05.453945       6 log.go:172] (0xc0019fe000) (3) Data frame handling
I0113 18:06:05.453956       6 log.go:172] (0xc000365130) Data frame received for 5
I0113 18:06:05.453960       6 log.go:172] (0xc0025da0a0) (5) Data frame handling
I0113 18:06:05.455281       6 log.go:172] (0xc000365130) Data frame received for 1
I0113 18:06:05.455308       6 log.go:172] (0xc00183c280) (1) Data frame handling
I0113 18:06:05.455339       6 log.go:172] (0xc00183c280) (1) Data frame sent
I0113 18:06:05.455360       6 log.go:172] (0xc000365130) (0xc00183c280) Stream removed, broadcasting: 1
I0113 18:06:05.455461       6 log.go:172] (0xc000365130) Go away received
I0113 18:06:05.455516       6 log.go:172] (0xc000365130) (0xc00183c280) Stream removed, broadcasting: 1
I0113 18:06:05.455538       6 log.go:172] (0xc000365130) (0xc0019fe000) Stream removed, broadcasting: 3
I0113 18:06:05.455551       6 log.go:172] (0xc000365130) (0xc0025da0a0) Stream removed, broadcasting: 5
Jan 13 18:06:05.455: INFO: Exec stderr: ""
Jan 13 18:06:05.455: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-b9lwz PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 13 18:06:05.455: INFO: >>> kubeConfig: /root/.kube/config
I0113 18:06:05.481982       6 log.go:172] (0xc0023f22c0) (0xc0026d6140) Create stream
I0113 18:06:05.482016       6 log.go:172] (0xc0023f22c0) (0xc0026d6140) Stream added, broadcasting: 1
I0113 18:06:05.484282       6 log.go:172] (0xc0023f22c0) Reply frame received for 1
I0113 18:06:05.484314       6 log.go:172] (0xc0023f22c0) (0xc0026d61e0) Create stream
I0113 18:06:05.484327       6 log.go:172] (0xc0023f22c0) (0xc0026d61e0) Stream added, broadcasting: 3
I0113 18:06:05.485091       6 log.go:172] (0xc0023f22c0) Reply frame received for 3
I0113 18:06:05.485129       6 log.go:172] (0xc0023f22c0) (0xc0019fe140) Create stream
I0113 18:06:05.485145       6 log.go:172] (0xc0023f22c0) (0xc0019fe140) Stream added, broadcasting: 5
I0113 18:06:05.485878       6 log.go:172] (0xc0023f22c0) Reply frame received for 5
I0113 18:06:05.565811       6 log.go:172] (0xc0023f22c0) Data frame received for 5
I0113 18:06:05.565841       6 log.go:172] (0xc0019fe140) (5) Data frame handling
I0113 18:06:05.565873       6 log.go:172] (0xc0023f22c0) Data frame received for 3
I0113 18:06:05.565895       6 log.go:172] (0xc0026d61e0) (3) Data frame handling
I0113 18:06:05.565904       6 log.go:172] (0xc0026d61e0) (3) Data frame sent
I0113 18:06:05.565914       6 log.go:172] (0xc0023f22c0) Data frame received for 3
I0113 18:06:05.565920       6 log.go:172] (0xc0026d61e0) (3) Data frame handling
I0113 18:06:05.566944       6 log.go:172] (0xc0023f22c0) Data frame received for 1
I0113 18:06:05.566958       6 log.go:172] (0xc0026d6140) (1) Data frame handling
I0113 18:06:05.566965       6 log.go:172] (0xc0026d6140) (1) Data frame sent
I0113 18:06:05.566986       6 log.go:172] (0xc0023f22c0) (0xc0026d6140) Stream removed, broadcasting: 1
I0113 18:06:05.567000       6 log.go:172] (0xc0023f22c0) Go away received
I0113 18:06:05.567154       6 log.go:172] (0xc0023f22c0) (0xc0026d6140) Stream removed, broadcasting: 1
I0113 18:06:05.567177       6 log.go:172] (0xc0023f22c0) (0xc0026d61e0) Stream removed, broadcasting: 3
I0113 18:06:05.567190       6 log.go:172] (0xc0023f22c0) (0xc0019fe140) Stream removed, broadcasting: 5
Jan 13 18:06:05.567: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 13 18:06:05.567: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-b9lwz PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 13 18:06:05.567: INFO: >>> kubeConfig: /root/.kube/config
I0113 18:06:05.598363       6 log.go:172] (0xc001a7e4d0) (0xc0025da320) Create stream
I0113 18:06:05.598403       6 log.go:172] (0xc001a7e4d0) (0xc0025da320) Stream added, broadcasting: 1
I0113 18:06:05.600995       6 log.go:172] (0xc001a7e4d0) Reply frame received for 1
I0113 18:06:05.601036       6 log.go:172] (0xc001a7e4d0) (0xc0019fe1e0) Create stream
I0113 18:06:05.601053       6 log.go:172] (0xc001a7e4d0) (0xc0019fe1e0) Stream added, broadcasting: 3
I0113 18:06:05.601818       6 log.go:172] (0xc001a7e4d0) Reply frame received for 3
I0113 18:06:05.601854       6 log.go:172] (0xc001a7e4d0) (0xc001a8a000) Create stream
I0113 18:06:05.601867       6 log.go:172] (0xc001a7e4d0) (0xc001a8a000) Stream added, broadcasting: 5
I0113 18:06:05.602552       6 log.go:172] (0xc001a7e4d0) Reply frame received for 5
I0113 18:06:05.660387       6 log.go:172] (0xc001a7e4d0) Data frame received for 5
I0113 18:06:05.660426       6 log.go:172] (0xc001a8a000) (5) Data frame handling
I0113 18:06:05.660535       6 log.go:172] (0xc001a7e4d0) Data frame received for 3
I0113 18:06:05.660551       6 log.go:172] (0xc0019fe1e0) (3) Data frame handling
I0113 18:06:05.660565       6 log.go:172] (0xc0019fe1e0) (3) Data frame sent
I0113 18:06:05.660790       6 log.go:172] (0xc001a7e4d0) Data frame received for 3
I0113 18:06:05.660813       6 log.go:172] (0xc0019fe1e0) (3) Data frame handling
I0113 18:06:05.665974       6 log.go:172] (0xc001a7e4d0) Data frame received for 1
I0113 18:06:05.666006       6 log.go:172] (0xc0025da320) (1) Data frame handling
I0113 18:06:05.666026       6 log.go:172] (0xc0025da320) (1) Data frame sent
I0113 18:06:05.666045       6 log.go:172] (0xc001a7e4d0) (0xc0025da320) Stream removed, broadcasting: 1
I0113 18:06:05.666070       6 log.go:172] (0xc001a7e4d0) Go away received
I0113 18:06:05.666135       6 log.go:172] (0xc001a7e4d0) (0xc0025da320) Stream removed, broadcasting: 1
I0113 18:06:05.666148       6 log.go:172] (0xc001a7e4d0) (0xc0019fe1e0) Stream removed, broadcasting: 3
I0113 18:06:05.666156       6 log.go:172] (0xc001a7e4d0) (0xc001a8a000) Stream removed, broadcasting: 5
Jan 13 18:06:05.666: INFO: Exec stderr: ""
Jan 13 18:06:05.666: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-b9lwz PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 13 18:06:05.666: INFO: >>> kubeConfig: /root/.kube/config
I0113 18:06:05.692644       6 log.go:172] (0xc0023f2840) (0xc0026d6500) Create stream
I0113 18:06:05.692673       6 log.go:172] (0xc0023f2840) (0xc0026d6500) Stream added, broadcasting: 1
I0113 18:06:05.694076       6 log.go:172] (0xc0023f2840) Reply frame received for 1
I0113 18:06:05.694115       6 log.go:172] (0xc0023f2840) (0xc0026d65a0) Create stream
I0113 18:06:05.694125       6 log.go:172] (0xc0023f2840) (0xc0026d65a0) Stream added, broadcasting: 3
I0113 18:06:05.694908       6 log.go:172] (0xc0023f2840) Reply frame received for 3
I0113 18:06:05.694955       6 log.go:172] (0xc0023f2840) (0xc0019fe280) Create stream
I0113 18:06:05.694970       6 log.go:172] (0xc0023f2840) (0xc0019fe280) Stream added, broadcasting: 5
I0113 18:06:05.695697       6 log.go:172] (0xc0023f2840) Reply frame received for 5
I0113 18:06:05.756081       6 log.go:172] (0xc0023f2840) Data frame received for 3
I0113 18:06:05.756117       6 log.go:172] (0xc0026d65a0) (3) Data frame handling
I0113 18:06:05.756131       6 log.go:172] (0xc0026d65a0) (3) Data frame sent
I0113 18:06:05.756195       6 log.go:172] (0xc0023f2840) Data frame received for 3
I0113 18:06:05.756209       6 log.go:172] (0xc0026d65a0) (3) Data frame handling
I0113 18:06:05.756387       6 log.go:172] (0xc0023f2840) Data frame received for 5
I0113 18:06:05.756397       6 log.go:172] (0xc0019fe280) (5) Data frame handling
I0113 18:06:05.757955       6 log.go:172] (0xc0023f2840) Data frame received for 1
I0113 18:06:05.757984       6 log.go:172] (0xc0026d6500) (1) Data frame handling
I0113 18:06:05.758009       6 log.go:172] (0xc0026d6500) (1) Data frame sent
I0113 18:06:05.758035       6 log.go:172] (0xc0023f2840) (0xc0026d6500) Stream removed, broadcasting: 1
I0113 18:06:05.758081       6 log.go:172] (0xc0023f2840) Go away received
I0113 18:06:05.758191       6 log.go:172] (0xc0023f2840) (0xc0026d6500) Stream removed, broadcasting: 1
I0113 18:06:05.758239       6 log.go:172] (0xc0023f2840) (0xc0026d65a0) Stream removed, broadcasting: 3
I0113 18:06:05.758262       6 log.go:172] (0xc0023f2840) (0xc0019fe280) Stream removed, broadcasting: 5
Jan 13 18:06:05.758: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 13 18:06:05.758: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-b9lwz PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 13 18:06:05.758: INFO: >>> kubeConfig: /root/.kube/config
I0113 18:06:05.780612       6 log.go:172] (0xc0023f2d10) (0xc0026d6820) Create stream
I0113 18:06:05.780641       6 log.go:172] (0xc0023f2d10) (0xc0026d6820) Stream added, broadcasting: 1
I0113 18:06:05.782338       6 log.go:172] (0xc0023f2d10) Reply frame received for 1
I0113 18:06:05.782378       6 log.go:172] (0xc0023f2d10) (0xc0019fe320) Create stream
I0113 18:06:05.782390       6 log.go:172] (0xc0023f2d10) (0xc0019fe320) Stream added, broadcasting: 3
I0113 18:06:05.783095       6 log.go:172] (0xc0023f2d10) Reply frame received for 3
I0113 18:06:05.783140       6 log.go:172] (0xc0023f2d10) (0xc00183c320) Create stream
I0113 18:06:05.783159       6 log.go:172] (0xc0023f2d10) (0xc00183c320) Stream added, broadcasting: 5
I0113 18:06:05.784003       6 log.go:172] (0xc0023f2d10) Reply frame received for 5
I0113 18:06:05.847815       6 log.go:172] (0xc0023f2d10) Data frame received for 5
I0113 18:06:05.847848       6 log.go:172] (0xc00183c320) (5) Data frame handling
I0113 18:06:05.847879       6 log.go:172] (0xc0023f2d10) Data frame received for 3
I0113 18:06:05.847888       6 log.go:172] (0xc0019fe320) (3) Data frame handling
I0113 18:06:05.847900       6 log.go:172] (0xc0019fe320) (3) Data frame sent
I0113 18:06:05.847907       6 log.go:172] (0xc0023f2d10) Data frame received for 3
I0113 18:06:05.847913       6 log.go:172] (0xc0019fe320) (3) Data frame handling
I0113 18:06:05.849241       6 log.go:172] (0xc0023f2d10) Data frame received for 1
I0113 18:06:05.849287       6 log.go:172] (0xc0026d6820) (1) Data frame handling
I0113 18:06:05.849304       6 log.go:172] (0xc0026d6820) (1) Data frame sent
I0113 18:06:05.849320       6 log.go:172] (0xc0023f2d10) (0xc0026d6820) Stream removed, broadcasting: 1
I0113 18:06:05.849334       6 log.go:172] (0xc0023f2d10) Go away received
I0113 18:06:05.849467       6 log.go:172] (0xc0023f2d10) (0xc0026d6820) Stream removed, broadcasting: 1
I0113 18:06:05.849491       6 log.go:172] (0xc0023f2d10) (0xc0019fe320) Stream removed, broadcasting: 3
I0113 18:06:05.849500       6 log.go:172] (0xc0023f2d10) (0xc00183c320) Stream removed, broadcasting: 5
Jan 13 18:06:05.849: INFO: Exec stderr: ""
Jan 13 18:06:05.849: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-b9lwz PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 13 18:06:05.849: INFO: >>> kubeConfig: /root/.kube/config
I0113 18:06:05.884310       6 log.go:172] (0xc0011fa580) (0xc0019fe5a0) Create stream
I0113 18:06:05.884350       6 log.go:172] (0xc0011fa580) (0xc0019fe5a0) Stream added, broadcasting: 1
I0113 18:06:05.886776       6 log.go:172] (0xc0011fa580) Reply frame received for 1
I0113 18:06:05.886814       6 log.go:172] (0xc0011fa580) (0xc001a8a0a0) Create stream
I0113 18:06:05.886824       6 log.go:172] (0xc0011fa580) (0xc001a8a0a0) Stream added, broadcasting: 3
I0113 18:06:05.887481       6 log.go:172] (0xc0011fa580) Reply frame received for 3
I0113 18:06:05.887521       6 log.go:172] (0xc0011fa580) (0xc0026d68c0) Create stream
I0113 18:06:05.887538       6 log.go:172] (0xc0011fa580) (0xc0026d68c0) Stream added, broadcasting: 5
I0113 18:06:05.888229       6 log.go:172] (0xc0011fa580) Reply frame received for 5
I0113 18:06:05.954759       6 log.go:172] (0xc0011fa580) Data frame received for 5
I0113 18:06:05.954811       6 log.go:172] (0xc0026d68c0) (5) Data frame handling
I0113 18:06:05.954848       6 log.go:172] (0xc0011fa580) Data frame received for 3
I0113 18:06:05.954867       6 log.go:172] (0xc001a8a0a0) (3) Data frame handling
I0113 18:06:05.954897       6 log.go:172] (0xc001a8a0a0) (3) Data frame sent
I0113 18:06:05.954919       6 log.go:172] (0xc0011fa580) Data frame received for 3
I0113 18:06:05.954939       6 log.go:172] (0xc001a8a0a0) (3) Data frame handling
I0113 18:06:05.955685       6 log.go:172] (0xc0011fa580) Data frame received for 1
I0113 18:06:05.955727       6 log.go:172] (0xc0019fe5a0) (1) Data frame handling
I0113 18:06:05.955764       6 log.go:172] (0xc0019fe5a0) (1) Data frame sent
I0113 18:06:05.955787       6 log.go:172] (0xc0011fa580) (0xc0019fe5a0) Stream removed, broadcasting: 1
I0113 18:06:05.955815       6 log.go:172] (0xc0011fa580) Go away received
I0113 18:06:05.955981       6 log.go:172] (0xc0011fa580) (0xc0019fe5a0) Stream removed, broadcasting: 1
I0113 18:06:05.956012       6 log.go:172] (0xc0011fa580) (0xc001a8a0a0) Stream removed, broadcasting: 3
I0113 18:06:05.956024       6 log.go:172] (0xc0011fa580) (0xc0026d68c0) Stream removed, broadcasting: 5
Jan 13 18:06:05.956: INFO: Exec stderr: ""
Jan 13 18:06:05.956: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-b9lwz PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 13 18:06:05.956: INFO: >>> kubeConfig: /root/.kube/config
I0113 18:06:05.994546       6 log.go:172] (0xc001db22c0) (0xc001a8a320) Create stream
I0113 18:06:05.994588       6 log.go:172] (0xc001db22c0) (0xc001a8a320) Stream added, broadcasting: 1
I0113 18:06:05.996257       6 log.go:172] (0xc001db22c0) Reply frame received for 1
I0113 18:06:05.996294       6 log.go:172] (0xc001db22c0) (0xc001a8a3c0) Create stream
I0113 18:06:05.996303       6 log.go:172] (0xc001db22c0) (0xc001a8a3c0) Stream added, broadcasting: 3
I0113 18:06:05.997251       6 log.go:172] (0xc001db22c0) Reply frame received for 3
I0113 18:06:05.997295       6 log.go:172] (0xc001db22c0) (0xc001a8a460) Create stream
I0113 18:06:05.997311       6 log.go:172] (0xc001db22c0) (0xc001a8a460) Stream added, broadcasting: 5
I0113 18:06:05.998206       6 log.go:172] (0xc001db22c0) Reply frame received for 5
I0113 18:06:06.067053       6 log.go:172] (0xc001db22c0) Data frame received for 3
I0113 18:06:06.067117       6 log.go:172] (0xc001a8a3c0) (3) Data frame handling
I0113 18:06:06.067157       6 log.go:172] (0xc001a8a3c0) (3) Data frame sent
I0113 18:06:06.067181       6 log.go:172] (0xc001db22c0) Data frame received for 3
I0113 18:06:06.067205       6 log.go:172] (0xc001a8a3c0) (3) Data frame handling
I0113 18:06:06.067246       6 log.go:172] (0xc001db22c0) Data frame received for 5
I0113 18:06:06.067316       6 log.go:172] (0xc001a8a460) (5) Data frame handling
I0113 18:06:06.068971       6 log.go:172] (0xc001db22c0) Data frame received for 1
I0113 18:06:06.069013       6 log.go:172] (0xc001a8a320) (1) Data frame handling
I0113 18:06:06.069043       6 log.go:172] (0xc001a8a320) (1) Data frame sent
I0113 18:06:06.069133       6 log.go:172] (0xc001db22c0) (0xc001a8a320) Stream removed, broadcasting: 1
I0113 18:06:06.069187       6 log.go:172] (0xc001db22c0) Go away received
I0113 18:06:06.069297       6 log.go:172] (0xc001db22c0) (0xc001a8a320) Stream removed, broadcasting: 1
I0113 18:06:06.069324       6 log.go:172] (0xc001db22c0) (0xc001a8a3c0) Stream removed, broadcasting: 3
I0113 18:06:06.069345       6 log.go:172] (0xc001db22c0) (0xc001a8a460) Stream removed, broadcasting: 5
Jan 13 18:06:06.069: INFO: Exec stderr: ""
Jan 13 18:06:06.069: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-b9lwz PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 13 18:06:06.069: INFO: >>> kubeConfig: /root/.kube/config
I0113 18:06:06.101189       6 log.go:172] (0xc0023f31e0) (0xc0026d6b40) Create stream
I0113 18:06:06.101213       6 log.go:172] (0xc0023f31e0) (0xc0026d6b40) Stream added, broadcasting: 1
I0113 18:06:06.102968       6 log.go:172] (0xc0023f31e0) Reply frame received for 1
I0113 18:06:06.103005       6 log.go:172] (0xc0023f31e0) (0xc001a8a5a0) Create stream
I0113 18:06:06.103021       6 log.go:172] (0xc0023f31e0) (0xc001a8a5a0) Stream added, broadcasting: 3
I0113 18:06:06.103760       6 log.go:172] (0xc0023f31e0) Reply frame received for 3
I0113 18:06:06.103785       6 log.go:172] (0xc0023f31e0) (0xc0019fe640) Create stream
I0113 18:06:06.103795       6 log.go:172] (0xc0023f31e0) (0xc0019fe640) Stream added, broadcasting: 5
I0113 18:06:06.104460       6 log.go:172] (0xc0023f31e0) Reply frame received for 5
I0113 18:06:06.177803       6 log.go:172] (0xc0023f31e0) Data frame received for 5
I0113 18:06:06.177845       6 log.go:172] (0xc0019fe640) (5) Data frame handling
I0113 18:06:06.177875       6 log.go:172] (0xc0023f31e0) Data frame received for 3
I0113 18:06:06.177889       6 log.go:172] (0xc001a8a5a0) (3) Data frame handling
I0113 18:06:06.177902       6 log.go:172] (0xc001a8a5a0) (3) Data frame sent
I0113 18:06:06.177921       6 log.go:172] (0xc0023f31e0) Data frame received for 3
I0113 18:06:06.177933       6 log.go:172] (0xc001a8a5a0) (3) Data frame handling
I0113 18:06:06.179227       6 log.go:172] (0xc0023f31e0) Data frame received for 1
I0113 18:06:06.179259       6 log.go:172] (0xc0026d6b40) (1) Data frame handling
I0113 18:06:06.179285       6 log.go:172] (0xc0026d6b40) (1) Data frame sent
I0113 18:06:06.179303       6 log.go:172] (0xc0023f31e0) (0xc0026d6b40) Stream removed, broadcasting: 1
I0113 18:06:06.179322       6 log.go:172] (0xc0023f31e0) Go away received
I0113 18:06:06.179472       6 log.go:172] (0xc0023f31e0) (0xc0026d6b40) Stream removed, broadcasting: 1
I0113 18:06:06.179498       6 log.go:172] (0xc0023f31e0) (0xc001a8a5a0) Stream removed, broadcasting: 3
I0113 18:06:06.179515       6 log.go:172] (0xc0023f31e0) (0xc0019fe640) Stream removed, broadcasting: 5
Jan 13 18:06:06.179: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:06:06.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-b9lwz" for this suite.
Jan 13 18:06:52.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:06:52.327: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-b9lwz, resource: bindings, ignored listing per whitelist
Jan 13 18:06:52.332: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-b9lwz deletion completed in 46.148859021s

• [SLOW TEST:57.408 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
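The three scenarios this test walks through (kubelet-managed `/etc/hosts`, an explicit mount over `/etc/hosts`, and `hostNetwork: true`) can be sketched roughly as follows. This is an assumption-laden reconstruction, not the suite's actual pod specs: container names mirror the log, and the `hostPath` mount is one plausible way to shadow `/etc/hosts` so the kubelet leaves it unmanaged.

```yaml
# Illustrative sketch of the hostNetwork=false test pod.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod                 # name taken from the log
spec:
  hostNetwork: false             # kubelet manages /etc/hosts for busybox-1/-2
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-3
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: etc-hosts
      mountPath: /etc/hosts      # explicit mount: kubelet does not overwrite it
  volumes:
  - name: etc-hosts
    hostPath:                    # hypothetical choice of volume source
      path: /etc/hosts
```

The companion `test-host-network-pod` sets `hostNetwork: true`, in which case the kubelet never rewrites `/etc/hosts`; the repeated `cat /etc/hosts` / `cat /etc/hosts-original` execs in the log compare the managed and unmanaged views.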
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:06:52.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-1d158778-55ca-11eb-8355-0242ac110009
STEP: Creating secret with name s-test-opt-upd-1d1587c2-55ca-11eb-8355-0242ac110009
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-1d158778-55ca-11eb-8355-0242ac110009
STEP: Updating secret s-test-opt-upd-1d1587c2-55ca-11eb-8355-0242ac110009
STEP: Creating secret with name s-test-opt-create-1d1587d7-55ca-11eb-8355-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:07:00.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-65bbd" for this suite.
Jan 13 18:07:20.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:07:20.747: INFO: namespace: e2e-tests-projected-65bbd, resource: bindings, ignored listing per whitelist
Jan 13 18:07:20.769: INFO: namespace e2e-tests-projected-65bbd deletion completed in 20.149139619s

• [SLOW TEST:28.437 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:07:20.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0113 18:07:30.937117       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 13 18:07:30.937: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:07:30.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-9zkvq" for this suite.
Jan 13 18:07:36.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:07:37.020: INFO: namespace: e2e-tests-gc-9zkvq, resource: bindings, ignored listing per whitelist
Jan 13 18:07:37.063: INFO: namespace e2e-tests-gc-9zkvq deletion completed in 6.122939776s

• [SLOW TEST:16.294 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:07:37.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 13 18:07:37.995: INFO: Pod name wrapped-volume-race-382fd8de-55ca-11eb-8355-0242ac110009: Found 0 pods out of 5
Jan 13 18:07:43.002: INFO: Pod name wrapped-volume-race-382fd8de-55ca-11eb-8355-0242ac110009: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-382fd8de-55ca-11eb-8355-0242ac110009 in namespace e2e-tests-emptydir-wrapper-k2tjd, will wait for the garbage collector to delete the pods
Jan 13 18:10:19.085: INFO: Deleting ReplicationController wrapped-volume-race-382fd8de-55ca-11eb-8355-0242ac110009 took: 7.307862ms
Jan 13 18:10:19.185: INFO: Terminating ReplicationController wrapped-volume-race-382fd8de-55ca-11eb-8355-0242ac110009 pods took: 100.192591ms
STEP: Creating RC which spawns configmap-volume pods
Jan 13 18:10:59.143: INFO: Pod name wrapped-volume-race-b01511af-55ca-11eb-8355-0242ac110009: Found 0 pods out of 5
Jan 13 18:11:04.152: INFO: Pod name wrapped-volume-race-b01511af-55ca-11eb-8355-0242ac110009: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b01511af-55ca-11eb-8355-0242ac110009 in namespace e2e-tests-emptydir-wrapper-k2tjd, will wait for the garbage collector to delete the pods
Jan 13 18:13:52.259: INFO: Deleting ReplicationController wrapped-volume-race-b01511af-55ca-11eb-8355-0242ac110009 took: 7.424386ms
Jan 13 18:13:52.359: INFO: Terminating ReplicationController wrapped-volume-race-b01511af-55ca-11eb-8355-0242ac110009 pods took: 100.218827ms
STEP: Creating RC which spawns configmap-volume pods
Jan 13 18:14:39.207: INFO: Pod name wrapped-volume-race-3341c390-55cb-11eb-8355-0242ac110009: Found 0 pods out of 5
Jan 13 18:14:44.216: INFO: Pod name wrapped-volume-race-3341c390-55cb-11eb-8355-0242ac110009: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3341c390-55cb-11eb-8355-0242ac110009 in namespace e2e-tests-emptydir-wrapper-k2tjd, will wait for the garbage collector to delete the pods
Jan 13 18:17:32.514: INFO: Deleting ReplicationController wrapped-volume-race-3341c390-55cb-11eb-8355-0242ac110009 took: 43.817205ms
Jan 13 18:17:32.614: INFO: Terminating ReplicationController wrapped-volume-race-3341c390-55cb-11eb-8355-0242ac110009 pods took: 100.224589ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:18:11.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-k2tjd" for this suite.
Jan 13 18:18:19.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:18:19.288: INFO: namespace: e2e-tests-emptydir-wrapper-k2tjd, resource: bindings, ignored listing per whitelist
Jan 13 18:18:19.356: INFO: namespace e2e-tests-emptydir-wrapper-k2tjd deletion completed in 8.114256812s

• [SLOW TEST:642.292 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:18:19.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 13 18:18:19.480: INFO: Waiting up to 5m0s for pod "pod-b68eddf1-55cb-11eb-8355-0242ac110009" in namespace "e2e-tests-emptydir-k9r2k" to be "success or failure"
Jan 13 18:18:19.495: INFO: Pod "pod-b68eddf1-55cb-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 14.643453ms
Jan 13 18:18:21.499: INFO: Pod "pod-b68eddf1-55cb-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018860063s
Jan 13 18:18:23.503: INFO: Pod "pod-b68eddf1-55cb-11eb-8355-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.022964706s
Jan 13 18:18:25.508: INFO: Pod "pod-b68eddf1-55cb-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027303267s
STEP: Saw pod success
Jan 13 18:18:25.508: INFO: Pod "pod-b68eddf1-55cb-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:18:25.511: INFO: Trying to get logs from node hunter-control-plane pod pod-b68eddf1-55cb-11eb-8355-0242ac110009 container test-container: 
STEP: delete the pod
Jan 13 18:18:25.549: INFO: Waiting for pod pod-b68eddf1-55cb-11eb-8355-0242ac110009 to disappear
Jan 13 18:18:25.554: INFO: Pod pod-b68eddf1-55cb-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:18:25.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k9r2k" for this suite.
Jan 13 18:18:31.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:18:31.671: INFO: namespace: e2e-tests-emptydir-k9r2k, resource: bindings, ignored listing per whitelist
Jan 13 18:18:31.694: INFO: namespace e2e-tests-emptydir-k9r2k deletion completed in 6.135703962s

• [SLOW TEST:12.337 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:18:31.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 13 18:18:31.932: INFO: Number of nodes with available pods: 0
Jan 13 18:18:31.932: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:18:33.018: INFO: Number of nodes with available pods: 0
Jan 13 18:18:33.018: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:18:33.940: INFO: Number of nodes with available pods: 0
Jan 13 18:18:33.940: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:18:34.939: INFO: Number of nodes with available pods: 0
Jan 13 18:18:34.939: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:18:35.940: INFO: Number of nodes with available pods: 1
Jan 13 18:18:35.940: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 13 18:18:36.022: INFO: Number of nodes with available pods: 1
Jan 13 18:18:36.022: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4hdq9, will wait for the garbage collector to delete the pods
Jan 13 18:18:37.173: INFO: Deleting DaemonSet.extensions daemon-set took: 33.307837ms
Jan 13 18:18:37.473: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.279273ms
Jan 13 18:18:39.976: INFO: Number of nodes with available pods: 0
Jan 13 18:18:39.976: INFO: Number of running nodes: 0, number of available pods: 0
Jan 13 18:18:39.979: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4hdq9/daemonsets","resourceVersion":"494382"},"items":null}

Jan 13 18:18:40.011: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4hdq9/pods","resourceVersion":"494382"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:18:40.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-4hdq9" for this suite.
Jan 13 18:18:46.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:18:46.067: INFO: namespace: e2e-tests-daemonsets-4hdq9, resource: bindings, ignored listing per whitelist
Jan 13 18:18:46.164: INFO: namespace e2e-tests-daemonsets-4hdq9 deletion completed in 6.143109545s

• [SLOW TEST:14.471 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:18:46.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 13 18:18:46.342: INFO: Waiting up to 5m0s for pod "client-containers-c6943e1f-55cb-11eb-8355-0242ac110009" in namespace "e2e-tests-containers-fsj56" to be "success or failure"
Jan 13 18:18:46.345: INFO: Pod "client-containers-c6943e1f-55cb-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.136248ms
Jan 13 18:18:48.349: INFO: Pod "client-containers-c6943e1f-55cb-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006771736s
Jan 13 18:18:50.952: INFO: Pod "client-containers-c6943e1f-55cb-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.609980382s
STEP: Saw pod success
Jan 13 18:18:50.952: INFO: Pod "client-containers-c6943e1f-55cb-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:18:50.955: INFO: Trying to get logs from node hunter-control-plane pod client-containers-c6943e1f-55cb-11eb-8355-0242ac110009 container test-container: 
STEP: delete the pod
Jan 13 18:18:51.112: INFO: Waiting for pod client-containers-c6943e1f-55cb-11eb-8355-0242ac110009 to disappear
Jan 13 18:18:51.124: INFO: Pod client-containers-c6943e1f-55cb-11eb-8355-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:18:51.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-fsj56" for this suite.
Jan 13 18:18:57.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:18:57.184: INFO: namespace: e2e-tests-containers-fsj56, resource: bindings, ignored listing per whitelist
Jan 13 18:18:57.264: INFO: namespace e2e-tests-containers-fsj56 deletion completed in 6.135911201s

• [SLOW TEST:11.100 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:18:57.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 13 18:18:57.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-5dm9p'
Jan 13 18:19:01.613: INFO: stderr: ""
Jan 13 18:19:01.613: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan 13 18:19:01.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5dm9p'
Jan 13 18:19:09.023: INFO: stderr: ""
Jan 13 18:19:09.023: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:19:09.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5dm9p" for this suite.
Jan 13 18:19:15.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:19:15.089: INFO: namespace: e2e-tests-kubectl-5dm9p, resource: bindings, ignored listing per whitelist
Jan 13 18:19:15.170: INFO: namespace e2e-tests-kubectl-5dm9p deletion completed in 6.142174481s

• [SLOW TEST:17.905 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:19:15.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 13 18:19:15.302: INFO: PodSpec: initContainers in spec.initContainers
Jan 13 18:20:08.711: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d7d9c7f7-55cb-11eb-8355-0242ac110009", GenerateName:"", Namespace:"e2e-tests-init-container-c6qgb", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-c6qgb/pods/pod-init-d7d9c7f7-55cb-11eb-8355-0242ac110009", UID:"d7db9b6a-55cb-11eb-9c75-0242ac12000b", ResourceVersion:"494630", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63746158755, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"302817833"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jgmxn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0024e1900), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jgmxn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jgmxn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jgmxn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002bbb408), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-control-plane", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001dd3320), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002bbb480)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002bbb4a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002bbb4a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002bbb4ac)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746158755, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746158755, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746158755, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746158755, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.11", PodIP:"10.244.0.174", 
StartTime:(*v1.Time)(0xc002bb9fa0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00182ea10)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00182ea80)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://4e0f76055377462dd2f010ca8418f250b63b164332d340be221ad58816d15968"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002bb9fe0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002bb9fc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:20:08.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-c6qgb" for this suite.
Jan 13 18:20:30.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:20:30.828: INFO: namespace: e2e-tests-init-container-c6qgb, resource: bindings, ignored listing per whitelist
Jan 13 18:20:30.886: INFO: namespace e2e-tests-init-container-c6qgb deletion completed in 22.153083166s

• [SLOW TEST:75.716 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:20:30.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-04fb446b-55cc-11eb-8355-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 13 18:20:31.032: INFO: Waiting up to 5m0s for pod "pod-secrets-04fbb77f-55cc-11eb-8355-0242ac110009" in namespace "e2e-tests-secrets-k8l5m" to be "success or failure"
Jan 13 18:20:31.036: INFO: Pod "pod-secrets-04fbb77f-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036537ms
Jan 13 18:20:33.040: INFO: Pod "pod-secrets-04fbb77f-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008076991s
Jan 13 18:20:35.045: INFO: Pod "pod-secrets-04fbb77f-55cc-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012973571s
STEP: Saw pod success
Jan 13 18:20:35.045: INFO: Pod "pod-secrets-04fbb77f-55cc-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:20:35.048: INFO: Trying to get logs from node hunter-control-plane pod pod-secrets-04fbb77f-55cc-11eb-8355-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan 13 18:20:35.074: INFO: Waiting for pod pod-secrets-04fbb77f-55cc-11eb-8355-0242ac110009 to disappear
Jan 13 18:20:35.105: INFO: Pod pod-secrets-04fbb77f-55cc-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:20:35.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-k8l5m" for this suite.
Jan 13 18:20:41.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:20:41.195: INFO: namespace: e2e-tests-secrets-k8l5m, resource: bindings, ignored listing per whitelist
Jan 13 18:20:41.262: INFO: namespace e2e-tests-secrets-k8l5m deletion completed in 6.153545165s

• [SLOW TEST:10.376 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:20:41.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-0b22829e-55cc-11eb-8355-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 13 18:20:41.359: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0b23cb30-55cc-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-vxjp6" to be "success or failure"
Jan 13 18:20:41.381: INFO: Pod "pod-projected-secrets-0b23cb30-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 21.360486ms
Jan 13 18:20:43.386: INFO: Pod "pod-projected-secrets-0b23cb30-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026870581s
Jan 13 18:20:45.390: INFO: Pod "pod-projected-secrets-0b23cb30-55cc-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030895219s
STEP: Saw pod success
Jan 13 18:20:45.390: INFO: Pod "pod-projected-secrets-0b23cb30-55cc-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:20:45.393: INFO: Trying to get logs from node hunter-control-plane pod pod-projected-secrets-0b23cb30-55cc-11eb-8355-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan 13 18:20:45.430: INFO: Waiting for pod pod-projected-secrets-0b23cb30-55cc-11eb-8355-0242ac110009 to disappear
Jan 13 18:20:45.455: INFO: Pod pod-projected-secrets-0b23cb30-55cc-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:20:45.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vxjp6" for this suite.
Jan 13 18:20:51.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:20:51.559: INFO: namespace: e2e-tests-projected-vxjp6, resource: bindings, ignored listing per whitelist
Jan 13 18:20:51.568: INFO: namespace e2e-tests-projected-vxjp6 deletion completed in 6.110671147s

• [SLOW TEST:10.306 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:20:51.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 13 18:20:51.734: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11516100-55cc-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-cpltx" to be "success or failure"
Jan 13 18:20:51.738: INFO: Pod "downwardapi-volume-11516100-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.307713ms
Jan 13 18:20:53.801: INFO: Pod "downwardapi-volume-11516100-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066397454s
Jan 13 18:20:55.805: INFO: Pod "downwardapi-volume-11516100-55cc-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070599063s
STEP: Saw pod success
Jan 13 18:20:55.805: INFO: Pod "downwardapi-volume-11516100-55cc-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:20:55.808: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-11516100-55cc-11eb-8355-0242ac110009 container client-container: 
STEP: delete the pod
Jan 13 18:20:55.868: INFO: Waiting for pod downwardapi-volume-11516100-55cc-11eb-8355-0242ac110009 to disappear
Jan 13 18:20:55.875: INFO: Pod downwardapi-volume-11516100-55cc-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:20:55.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cpltx" for this suite.
Jan 13 18:21:01.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:21:01.950: INFO: namespace: e2e-tests-projected-cpltx, resource: bindings, ignored listing per whitelist
Jan 13 18:21:01.982: INFO: namespace e2e-tests-projected-cpltx deletion completed in 6.104576609s

• [SLOW TEST:10.414 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:21:01.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 13 18:21:02.255: INFO: Waiting up to 5m0s for pod "pod-1796f7aa-55cc-11eb-8355-0242ac110009" in namespace "e2e-tests-emptydir-6pv7r" to be "success or failure"
Jan 13 18:21:02.297: INFO: Pod "pod-1796f7aa-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 41.819663ms
Jan 13 18:21:04.330: INFO: Pod "pod-1796f7aa-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074258514s
Jan 13 18:21:06.333: INFO: Pod "pod-1796f7aa-55cc-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078060395s
STEP: Saw pod success
Jan 13 18:21:06.333: INFO: Pod "pod-1796f7aa-55cc-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:21:06.336: INFO: Trying to get logs from node hunter-control-plane pod pod-1796f7aa-55cc-11eb-8355-0242ac110009 container test-container: 
STEP: delete the pod
Jan 13 18:21:06.413: INFO: Waiting for pod pod-1796f7aa-55cc-11eb-8355-0242ac110009 to disappear
Jan 13 18:21:06.509: INFO: Pod pod-1796f7aa-55cc-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:21:06.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6pv7r" for this suite.
Jan 13 18:21:12.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:21:12.659: INFO: namespace: e2e-tests-emptydir-6pv7r, resource: bindings, ignored listing per whitelist
Jan 13 18:21:12.666: INFO: namespace e2e-tests-emptydir-6pv7r deletion completed in 6.120262281s

• [SLOW TEST:10.683 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:21:12.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 13 18:21:12.803: INFO: Waiting up to 5m0s for pod "pod-1de19b90-55cc-11eb-8355-0242ac110009" in namespace "e2e-tests-emptydir-sr6cp" to be "success or failure"
Jan 13 18:21:12.807: INFO: Pod "pod-1de19b90-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.728363ms
Jan 13 18:21:14.811: INFO: Pod "pod-1de19b90-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007969011s
Jan 13 18:21:16.815: INFO: Pod "pod-1de19b90-55cc-11eb-8355-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.011165536s
Jan 13 18:21:18.818: INFO: Pod "pod-1de19b90-55cc-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014827122s
STEP: Saw pod success
Jan 13 18:21:18.818: INFO: Pod "pod-1de19b90-55cc-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:21:18.821: INFO: Trying to get logs from node hunter-control-plane pod pod-1de19b90-55cc-11eb-8355-0242ac110009 container test-container: 
STEP: delete the pod
Jan 13 18:21:18.838: INFO: Waiting for pod pod-1de19b90-55cc-11eb-8355-0242ac110009 to disappear
Jan 13 18:21:18.843: INFO: Pod pod-1de19b90-55cc-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:21:18.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sr6cp" for this suite.
Jan 13 18:21:24.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:21:24.951: INFO: namespace: e2e-tests-emptydir-sr6cp, resource: bindings, ignored listing per whitelist
Jan 13 18:21:24.955: INFO: namespace e2e-tests-emptydir-sr6cp deletion completed in 6.109348621s

• [SLOW TEST:12.289 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:21:24.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 13 18:21:29.183: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-253afcbe-55cc-11eb-8355-0242ac110009,GenerateName:,Namespace:e2e-tests-events-4nwfl,SelfLink:/api/v1/namespaces/e2e-tests-events-4nwfl/pods/send-events-253afcbe-55cc-11eb-8355-0242ac110009,UID:2540ce7a-55cc-11eb-9c75-0242ac12000b,ResourceVersion:494914,Generation:0,CreationTimestamp:2021-01-13 18:21:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 124429809,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vqrz4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vqrz4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-vqrz4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001cab8b0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001cab8d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:21:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:21:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:21:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:21:25 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.0.180,StartTime:2021-01-13 18:21:25 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2021-01-13 18:21:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://577d081dba2a621257c5a71b7133b83d3a898a57b38df6e0de45b808bce8e61b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 13 18:21:31.188: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 13 18:21:33.192: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:21:33.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-4nwfl" for this suite.
Jan 13 18:22:11.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:22:11.263: INFO: namespace: e2e-tests-events-4nwfl, resource: bindings, ignored listing per whitelist
Jan 13 18:22:11.327: INFO: namespace e2e-tests-events-4nwfl deletion completed in 38.105110287s

• [SLOW TEST:46.372 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:22:11.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-40dd45e1-55cc-11eb-8355-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 13 18:22:11.495: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-40dde447-55cc-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-vprfn" to be "success or failure"
Jan 13 18:22:11.498: INFO: Pod "pod-projected-secrets-40dde447-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.497336ms
Jan 13 18:22:13.502: INFO: Pod "pod-projected-secrets-40dde447-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007049857s
Jan 13 18:22:15.506: INFO: Pod "pod-projected-secrets-40dde447-55cc-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010699756s
STEP: Saw pod success
Jan 13 18:22:15.506: INFO: Pod "pod-projected-secrets-40dde447-55cc-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:22:15.509: INFO: Trying to get logs from node hunter-control-plane pod pod-projected-secrets-40dde447-55cc-11eb-8355-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Jan 13 18:22:15.531: INFO: Waiting for pod pod-projected-secrets-40dde447-55cc-11eb-8355-0242ac110009 to disappear
Jan 13 18:22:15.536: INFO: Pod pod-projected-secrets-40dde447-55cc-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:22:15.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vprfn" for this suite.
Jan 13 18:22:21.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:22:21.615: INFO: namespace: e2e-tests-projected-vprfn, resource: bindings, ignored listing per whitelist
Jan 13 18:22:21.638: INFO: namespace e2e-tests-projected-vprfn deletion completed in 6.098790636s

• [SLOW TEST:10.311 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:22:21.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-46f81e41-55cc-11eb-8355-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 13 18:22:21.818: INFO: Waiting up to 5m0s for pod "pod-secrets-4704bdee-55cc-11eb-8355-0242ac110009" in namespace "e2e-tests-secrets-dsh6b" to be "success or failure"
Jan 13 18:22:21.838: INFO: Pod "pod-secrets-4704bdee-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 19.393055ms
Jan 13 18:22:23.905: INFO: Pod "pod-secrets-4704bdee-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086679996s
Jan 13 18:22:25.909: INFO: Pod "pod-secrets-4704bdee-55cc-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090379264s
STEP: Saw pod success
Jan 13 18:22:25.909: INFO: Pod "pod-secrets-4704bdee-55cc-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:22:25.911: INFO: Trying to get logs from node hunter-control-plane pod pod-secrets-4704bdee-55cc-11eb-8355-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan 13 18:22:25.927: INFO: Waiting for pod pod-secrets-4704bdee-55cc-11eb-8355-0242ac110009 to disappear
Jan 13 18:22:25.931: INFO: Pod pod-secrets-4704bdee-55cc-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:22:25.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dsh6b" for this suite.
Jan 13 18:22:31.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:22:31.998: INFO: namespace: e2e-tests-secrets-dsh6b, resource: bindings, ignored listing per whitelist
Jan 13 18:22:32.053: INFO: namespace e2e-tests-secrets-dsh6b deletion completed in 6.118999191s
STEP: Destroying namespace "e2e-tests-secret-namespace-jnncp" for this suite.
Jan 13 18:22:38.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:22:38.109: INFO: namespace: e2e-tests-secret-namespace-jnncp, resource: bindings, ignored listing per whitelist
Jan 13 18:22:38.170: INFO: namespace e2e-tests-secret-namespace-jnncp deletion completed in 6.116771026s

• [SLOW TEST:16.531 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:22:38.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan 13 18:22:38.837: INFO: created pod pod-service-account-defaultsa
Jan 13 18:22:38.837: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 13 18:22:38.845: INFO: created pod pod-service-account-mountsa
Jan 13 18:22:38.845: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 13 18:22:38.851: INFO: created pod pod-service-account-nomountsa
Jan 13 18:22:38.851: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 13 18:22:38.947: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 13 18:22:38.947: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 13 18:22:38.982: INFO: created pod pod-service-account-mountsa-mountspec
Jan 13 18:22:38.982: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 13 18:22:39.007: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 13 18:22:39.007: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 13 18:22:39.090: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 13 18:22:39.090: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 13 18:22:39.109: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 13 18:22:39.109: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 13 18:22:39.140: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 13 18:22:39.140: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:22:39.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-t4v8h" for this suite.
Jan 13 18:23:09.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:23:09.372: INFO: namespace: e2e-tests-svcaccounts-t4v8h, resource: bindings, ignored listing per whitelist
Jan 13 18:23:09.452: INFO: namespace e2e-tests-svcaccounts-t4v8h deletion completed in 30.265703261s

• [SLOW TEST:31.282 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:23:09.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 18:23:09.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:23:13.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-66zwz" for this suite.
Jan 13 18:23:51.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:23:51.828: INFO: namespace: e2e-tests-pods-66zwz, resource: bindings, ignored listing per whitelist
Jan 13 18:23:51.873: INFO: namespace e2e-tests-pods-66zwz deletion completed in 38.120417696s

• [SLOW TEST:42.421 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:23:51.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 13 18:23:56.541: INFO: Successfully updated pod "annotationupdate7cbcd9b7-55cc-11eb-8355-0242ac110009"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:24:00.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p5c2l" for this suite.
Jan 13 18:24:22.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:24:22.637: INFO: namespace: e2e-tests-projected-p5c2l, resource: bindings, ignored listing per whitelist
Jan 13 18:24:22.704: INFO: namespace e2e-tests-projected-p5c2l deletion completed in 22.114973378s

• [SLOW TEST:30.830 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:24:22.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 13 18:24:30.904: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 13 18:24:30.972: INFO: Pod pod-with-poststart-http-hook still exists
Jan 13 18:24:32.973: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 13 18:24:32.977: INFO: Pod pod-with-poststart-http-hook still exists
Jan 13 18:24:34.972: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 13 18:24:34.976: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:24:34.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8rtkl" for this suite.
Jan 13 18:24:56.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:24:57.019: INFO: namespace: e2e-tests-container-lifecycle-hook-8rtkl, resource: bindings, ignored listing per whitelist
Jan 13 18:24:57.088: INFO: namespace e2e-tests-container-lifecycle-hook-8rtkl deletion completed in 22.108880352s

• [SLOW TEST:34.384 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:24:57.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-a3a10e84-55cc-11eb-8355-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 13 18:24:57.210: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a3a2e5c7-55cc-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-q2msd" to be "success or failure"
Jan 13 18:24:57.284: INFO: Pod "pod-projected-secrets-a3a2e5c7-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 74.491413ms
Jan 13 18:24:59.373: INFO: Pod "pod-projected-secrets-a3a2e5c7-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16301519s
Jan 13 18:25:01.377: INFO: Pod "pod-projected-secrets-a3a2e5c7-55cc-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.167451493s
STEP: Saw pod success
Jan 13 18:25:01.377: INFO: Pod "pod-projected-secrets-a3a2e5c7-55cc-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:25:01.381: INFO: Trying to get logs from node hunter-control-plane pod pod-projected-secrets-a3a2e5c7-55cc-11eb-8355-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Jan 13 18:25:01.428: INFO: Waiting for pod pod-projected-secrets-a3a2e5c7-55cc-11eb-8355-0242ac110009 to disappear
Jan 13 18:25:01.452: INFO: Pod pod-projected-secrets-a3a2e5c7-55cc-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:25:01.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q2msd" for this suite.
Jan 13 18:25:07.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:25:07.491: INFO: namespace: e2e-tests-projected-q2msd, resource: bindings, ignored listing per whitelist
Jan 13 18:25:07.621: INFO: namespace e2e-tests-projected-q2msd deletion completed in 6.165660118s

• [SLOW TEST:10.533 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
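An annotation on the repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Phase="Pending" ... Phase="Succeeded"` lines that appear in this and several later tests: they are a polling wait loop with a deadline. A self-contained sketch of that pattern, assuming illustrative names rather than the framework's real API:

```python
# Hedged sketch of the framework's wait-for-pod loop as seen in the log:
# poll the pod phase at a fixed interval, log the elapsed time, and stop
# on a terminal phase or on timeout. Function and parameter names are
# assumptions for illustration only.
import time

def wait_for_pod_condition(get_phase, timeout_s=300.0, poll_s=2.0,
                           clock=time.monotonic):
    """Poll get_phase() until it returns Succeeded/Failed or timeout_s elapses."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.6f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout_s:
            raise TimeoutError(f"pod not terminal within {timeout_s}s")
        time.sleep(poll_s)

# Simulated sequence mirroring the log (Pending, Pending, Succeeded);
# poll_s=0.0 so the demo does not actually sleep.
phases = iter(["Pending", "Pending", "Succeeded"])
assert wait_for_pod_condition(lambda: next(phases), poll_s=0.0) == "Succeeded"
```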
SSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:25:07.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 13 18:25:07.740: INFO: Waiting up to 5m0s for pod "downward-api-a9e86e1e-55cc-11eb-8355-0242ac110009" in namespace "e2e-tests-downward-api-78wsd" to be "success or failure"
Jan 13 18:25:07.828: INFO: Pod "downward-api-a9e86e1e-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 88.257254ms
Jan 13 18:25:09.834: INFO: Pod "downward-api-a9e86e1e-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094102863s
Jan 13 18:25:11.930: INFO: Pod "downward-api-a9e86e1e-55cc-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.19059484s
STEP: Saw pod success
Jan 13 18:25:11.930: INFO: Pod "downward-api-a9e86e1e-55cc-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:25:11.933: INFO: Trying to get logs from node hunter-control-plane pod downward-api-a9e86e1e-55cc-11eb-8355-0242ac110009 container dapi-container: 
STEP: delete the pod
Jan 13 18:25:12.188: INFO: Waiting for pod downward-api-a9e86e1e-55cc-11eb-8355-0242ac110009 to disappear
Jan 13 18:25:12.301: INFO: Pod downward-api-a9e86e1e-55cc-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:25:12.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-78wsd" for this suite.
Jan 13 18:25:18.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:25:18.421: INFO: namespace: e2e-tests-downward-api-78wsd, resource: bindings, ignored listing per whitelist
Jan 13 18:25:18.432: INFO: namespace e2e-tests-downward-api-78wsd deletion completed in 6.128206531s

• [SLOW TEST:10.811 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:25:18.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0113 18:25:29.231801       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 13 18:25:29.231: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:25:29.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-2bcw2" for this suite.
Jan 13 18:25:37.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:25:37.305: INFO: namespace: e2e-tests-gc-2bcw2, resource: bindings, ignored listing per whitelist
Jan 13 18:25:37.469: INFO: namespace e2e-tests-gc-2bcw2 deletion completed in 8.234544433s

• [SLOW TEST:19.036 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
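An annotation on the garbage-collector test above: the invariant it checks is that a dependent object is collectible only when none of its owners still exist, which is why pods given `simpletest-rc-to-stay` as an additional owner must survive deletion of `simpletest-rc-to-be-deleted`. A minimal sketch of that rule (a simplification of the real ownerReferences machinery):

```python
# Hedged sketch of the GC eligibility rule the test above verifies:
# an object with ownerReferences is deleted only once every owner is gone.
def deletable(owner_refs, live_objects):
    """True iff no referenced owner is still live."""
    return all(owner not in live_objects for owner in owner_refs)

live = {"simpletest-rc-to-stay"}   # rc-to-be-deleted has been removed

# Pods owned only by the deleted RC are collected:
assert deletable({"simpletest-rc-to-be-deleted"}, live)
# Pods that also list the surviving RC as an owner are kept:
assert not deletable({"simpletest-rc-to-be-deleted",
                      "simpletest-rc-to-stay"}, live)
```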
S
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:25:37.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:25:37.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-rvclr" for this suite.
Jan 13 18:25:43.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:25:43.781: INFO: namespace: e2e-tests-services-rvclr, resource: bindings, ignored listing per whitelist
Jan 13 18:25:43.789: INFO: namespace e2e-tests-services-rvclr deletion completed in 6.13459561s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.320 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:25:43.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan 13 18:25:43.938: INFO: Waiting up to 5m0s for pod "var-expansion-bf7b9c4a-55cc-11eb-8355-0242ac110009" in namespace "e2e-tests-var-expansion-2ljfg" to be "success or failure"
Jan 13 18:25:43.948: INFO: Pod "var-expansion-bf7b9c4a-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 9.447283ms
Jan 13 18:25:45.952: INFO: Pod "var-expansion-bf7b9c4a-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013656965s
Jan 13 18:25:47.957: INFO: Pod "var-expansion-bf7b9c4a-55cc-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018223665s
STEP: Saw pod success
Jan 13 18:25:47.957: INFO: Pod "var-expansion-bf7b9c4a-55cc-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:25:47.959: INFO: Trying to get logs from node hunter-control-plane pod var-expansion-bf7b9c4a-55cc-11eb-8355-0242ac110009 container dapi-container: 
STEP: delete the pod
Jan 13 18:25:47.995: INFO: Waiting for pod var-expansion-bf7b9c4a-55cc-11eb-8355-0242ac110009 to disappear
Jan 13 18:25:48.068: INFO: Pod var-expansion-bf7b9c4a-55cc-11eb-8355-0242ac110009 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:25:48.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-2ljfg" for this suite.
Jan 13 18:25:54.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:25:54.101: INFO: namespace: e2e-tests-var-expansion-2ljfg, resource: bindings, ignored listing per whitelist
Jan 13 18:25:54.186: INFO: namespace e2e-tests-var-expansion-2ljfg deletion completed in 6.113861235s

• [SLOW TEST:10.396 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:25:54.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 13 18:25:54.296: INFO: Waiting up to 5m0s for pod "pod-c5aa8d65-55cc-11eb-8355-0242ac110009" in namespace "e2e-tests-emptydir-9kd77" to be "success or failure"
Jan 13 18:25:54.313: INFO: Pod "pod-c5aa8d65-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.616609ms
Jan 13 18:25:56.317: INFO: Pod "pod-c5aa8d65-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020653919s
Jan 13 18:25:58.321: INFO: Pod "pod-c5aa8d65-55cc-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024970456s
STEP: Saw pod success
Jan 13 18:25:58.321: INFO: Pod "pod-c5aa8d65-55cc-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:25:58.324: INFO: Trying to get logs from node hunter-control-plane pod pod-c5aa8d65-55cc-11eb-8355-0242ac110009 container test-container: 
STEP: delete the pod
Jan 13 18:25:58.381: INFO: Waiting for pod pod-c5aa8d65-55cc-11eb-8355-0242ac110009 to disappear
Jan 13 18:25:58.459: INFO: Pod pod-c5aa8d65-55cc-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:25:58.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9kd77" for this suite.
Jan 13 18:26:04.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:26:04.552: INFO: namespace: e2e-tests-emptydir-9kd77, resource: bindings, ignored listing per whitelist
Jan 13 18:26:04.623: INFO: namespace e2e-tests-emptydir-9kd77 deletion completed in 6.160397629s

• [SLOW TEST:10.437 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:26:04.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 13 18:26:04.820: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:26:13.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-tkwsd" for this suite.
Jan 13 18:26:19.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:26:19.159: INFO: namespace: e2e-tests-init-container-tkwsd, resource: bindings, ignored listing per whitelist
Jan 13 18:26:19.217: INFO: namespace e2e-tests-init-container-tkwsd deletion completed in 6.111775321s

• [SLOW TEST:14.594 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:26:19.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan 13 18:26:19.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 13 18:26:19.476: INFO: stderr: ""
Jan 13 18:26:19.476: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:26:19.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lh2qr" for this suite.
Jan 13 18:26:25.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:26:25.543: INFO: namespace: e2e-tests-kubectl-lh2qr, resource: bindings, ignored listing per whitelist
Jan 13 18:26:25.610: INFO: namespace e2e-tests-kubectl-lh2qr deletion completed in 6.130324002s

• [SLOW TEST:6.393 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
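An annotation on the `kubectl api-versions` test above: its stdout is a newline-separated list of group/version strings, and the check amounts to asserting that the core `v1` group is among them. A sketch using an abridged copy of the stdout recorded in the log:

```python
# Hedged sketch of the "v1 is in available api versions" check. The stdout
# below is abridged from the full list captured in the log above.
stdout = (
    "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\n"
    "apiregistration.k8s.io/v1\napps/v1\nbatch/v1\nnetworking.k8s.io/v1\n"
    "rbac.authorization.k8s.io/v1\nstorage.k8s.io/v1\nv1\n"
)

versions = stdout.strip().split("\n")
assert "v1" in versions        # the core API group is served
assert "apps/v1" in versions   # workload APIs are served too
```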
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:26:25.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 13 18:26:25.744: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8672459-55cc-11eb-8355-0242ac110009" in namespace "e2e-tests-downward-api-jpcf8" to be "success or failure"
Jan 13 18:26:25.746: INFO: Pod "downwardapi-volume-d8672459-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.83874ms
Jan 13 18:26:27.907: INFO: Pod "downwardapi-volume-d8672459-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163314118s
Jan 13 18:26:29.912: INFO: Pod "downwardapi-volume-d8672459-55cc-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.167975295s
STEP: Saw pod success
Jan 13 18:26:29.912: INFO: Pod "downwardapi-volume-d8672459-55cc-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:26:29.915: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-d8672459-55cc-11eb-8355-0242ac110009 container client-container: 
STEP: delete the pod
Jan 13 18:26:29.934: INFO: Waiting for pod downwardapi-volume-d8672459-55cc-11eb-8355-0242ac110009 to disappear
Jan 13 18:26:29.938: INFO: Pod downwardapi-volume-d8672459-55cc-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:26:29.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jpcf8" for this suite.
Jan 13 18:26:35.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:26:35.992: INFO: namespace: e2e-tests-downward-api-jpcf8, resource: bindings, ignored listing per whitelist
Jan 13 18:26:36.104: INFO: namespace e2e-tests-downward-api-jpcf8 deletion completed in 6.144228376s

• [SLOW TEST:10.494 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
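The "success or failure" wait seen in the Downward API test above (Pending at 2.8ms, Pending at ~2.2s, Succeeded at ~4.2s, under a 5m0s budget) is a phase-polling loop. A minimal Python sketch of that pattern, assuming a hypothetical `get_phase` callable standing in for the API query the framework performs; this is illustrative, not the framework's actual Go code:

```python
import time

def wait_for_pod_success(get_phase, timeout_s=300.0, interval_s=2.0):
    """Poll a pod's phase until it reaches a terminal state or times out.

    `get_phase` is a hypothetical stand-in for reading pod.status.phase;
    it returns "Pending", "Running", "Succeeded", or "Failed".
    Returns True on Succeeded, False on Failed (the log's
    "success or failure" condition).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase == "Succeeded":
            return True
        if phase == "Failed":
            return False
        time.sleep(interval_s)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Stubbed phase sequence mirroring the log: two Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_success(lambda: next(phases), interval_s=0.0)
# result is True
```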
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:26:36.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:26:40.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-gqdj6" for this suite.
Jan 13 18:26:46.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:26:46.486: INFO: namespace: e2e-tests-emptydir-wrapper-gqdj6, resource: bindings, ignored listing per whitelist
Jan 13 18:26:46.509: INFO: namespace e2e-tests-emptydir-wrapper-gqdj6 deletion completed in 6.105056438s

• [SLOW TEST:10.405 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:26:46.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 13 18:26:46.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4dec79b-55cc-11eb-8355-0242ac110009" in namespace "e2e-tests-downward-api-546fv" to be "success or failure"
Jan 13 18:26:46.662: INFO: Pod "downwardapi-volume-e4dec79b-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.141899ms
Jan 13 18:26:48.746: INFO: Pod "downwardapi-volume-e4dec79b-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08674544s
Jan 13 18:26:50.750: INFO: Pod "downwardapi-volume-e4dec79b-55cc-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09068052s
STEP: Saw pod success
Jan 13 18:26:50.750: INFO: Pod "downwardapi-volume-e4dec79b-55cc-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:26:50.753: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-e4dec79b-55cc-11eb-8355-0242ac110009 container client-container: 
STEP: delete the pod
Jan 13 18:26:50.777: INFO: Waiting for pod downwardapi-volume-e4dec79b-55cc-11eb-8355-0242ac110009 to disappear
Jan 13 18:26:50.782: INFO: Pod downwardapi-volume-e4dec79b-55cc-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:26:50.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-546fv" for this suite.
Jan 13 18:26:56.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:26:56.922: INFO: namespace: e2e-tests-downward-api-546fv, resource: bindings, ignored listing per whitelist
Jan 13 18:26:56.950: INFO: namespace e2e-tests-downward-api-546fv deletion completed in 6.165351078s

• [SLOW TEST:10.441 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:26:56.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 13 18:26:57.067: INFO: Waiting up to 5m0s for pod "pod-eb132e82-55cc-11eb-8355-0242ac110009" in namespace "e2e-tests-emptydir-kv2nb" to be "success or failure"
Jan 13 18:26:57.070: INFO: Pod "pod-eb132e82-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.386537ms
Jan 13 18:26:59.074: INFO: Pod "pod-eb132e82-55cc-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006895364s
Jan 13 18:27:01.078: INFO: Pod "pod-eb132e82-55cc-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010681194s
STEP: Saw pod success
Jan 13 18:27:01.078: INFO: Pod "pod-eb132e82-55cc-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:27:01.080: INFO: Trying to get logs from node hunter-control-plane pod pod-eb132e82-55cc-11eb-8355-0242ac110009 container test-container: 
STEP: delete the pod
Jan 13 18:27:01.123: INFO: Waiting for pod pod-eb132e82-55cc-11eb-8355-0242ac110009 to disappear
Jan 13 18:27:01.137: INFO: Pod pod-eb132e82-55cc-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:27:01.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kv2nb" for this suite.
Jan 13 18:27:07.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:27:07.250: INFO: namespace: e2e-tests-emptydir-kv2nb, resource: bindings, ignored listing per whitelist
Jan 13 18:27:07.273: INFO: namespace e2e-tests-emptydir-kv2nb deletion completed in 6.133387839s

• [SLOW TEST:10.323 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
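The EmptyDir `(non-root,0644,default)` case above boils down to writing a file into the volume and asserting its permission bits are 0644. A rough local illustration of that mode check in Python, under the assumption of a POSIX filesystem; the helper name is made up for this sketch and is not part of the e2e framework:

```python
import os
import stat
import tempfile

def file_mode_octal(path):
    """Return only the permission bits of a file, e.g. 0o644."""
    return stat.S_IMODE(os.stat(path).st_mode)

# Create a scratch file and set the mode the conformance case expects.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o644)
assert file_mode_octal(path) == 0o644
os.unlink(path)
```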
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:27:07.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-jwf64
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 13 18:27:07.466: INFO: Found 0 stateful pods, waiting for 3
Jan 13 18:27:17.471: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 18:27:17.471: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 18:27:17.471: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 13 18:27:27.470: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 18:27:27.470: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 18:27:27.470: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 18:27:27.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwf64 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 13 18:27:27.723: INFO: stderr: "I0113 18:27:27.613163    2145 log.go:172] (0xc00014c840) (0xc000754640) Create stream\nI0113 18:27:27.613232    2145 log.go:172] (0xc00014c840) (0xc000754640) Stream added, broadcasting: 1\nI0113 18:27:27.615729    2145 log.go:172] (0xc00014c840) Reply frame received for 1\nI0113 18:27:27.615765    2145 log.go:172] (0xc00014c840) (0xc0007546e0) Create stream\nI0113 18:27:27.615774    2145 log.go:172] (0xc00014c840) (0xc0007546e0) Stream added, broadcasting: 3\nI0113 18:27:27.617256    2145 log.go:172] (0xc00014c840) Reply frame received for 3\nI0113 18:27:27.617303    2145 log.go:172] (0xc00014c840) (0xc0007cec80) Create stream\nI0113 18:27:27.617328    2145 log.go:172] (0xc00014c840) (0xc0007cec80) Stream added, broadcasting: 5\nI0113 18:27:27.618286    2145 log.go:172] (0xc00014c840) Reply frame received for 5\nI0113 18:27:27.718151    2145 log.go:172] (0xc00014c840) Data frame received for 3\nI0113 18:27:27.718189    2145 log.go:172] (0xc0007546e0) (3) Data frame handling\nI0113 18:27:27.718208    2145 log.go:172] (0xc0007546e0) (3) Data frame sent\nI0113 18:27:27.718418    2145 log.go:172] (0xc00014c840) Data frame received for 3\nI0113 18:27:27.718551    2145 log.go:172] (0xc0007546e0) (3) Data frame handling\nI0113 18:27:27.718733    2145 log.go:172] (0xc00014c840) Data frame received for 5\nI0113 18:27:27.718744    2145 log.go:172] (0xc0007cec80) (5) Data frame handling\nI0113 18:27:27.720279    2145 log.go:172] (0xc00014c840) Data frame received for 1\nI0113 18:27:27.720329    2145 log.go:172] (0xc000754640) (1) Data frame handling\nI0113 18:27:27.720392    2145 log.go:172] (0xc000754640) (1) Data frame sent\nI0113 18:27:27.720425    2145 log.go:172] (0xc00014c840) (0xc000754640) Stream removed, broadcasting: 1\nI0113 18:27:27.720467    2145 log.go:172] (0xc00014c840) Go away received\nI0113 18:27:27.720585    2145 log.go:172] (0xc00014c840) (0xc000754640) Stream removed, broadcasting: 1\nI0113 18:27:27.720600    2145 log.go:172] (0xc00014c840) (0xc0007546e0) Stream removed, broadcasting: 3\nI0113 18:27:27.720609    2145 log.go:172] (0xc00014c840) (0xc0007cec80) Stream removed, broadcasting: 5\n"
Jan 13 18:27:27.724: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 13 18:27:27.724: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 13 18:27:37.752: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 13 18:27:47.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwf64 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:27:48.024: INFO: stderr: "I0113 18:27:47.951932    2167 log.go:172] (0xc00013c580) (0xc0007285a0) Create stream\nI0113 18:27:47.951992    2167 log.go:172] (0xc00013c580) (0xc0007285a0) Stream added, broadcasting: 1\nI0113 18:27:47.953980    2167 log.go:172] (0xc00013c580) Reply frame received for 1\nI0113 18:27:47.954009    2167 log.go:172] (0xc00013c580) (0xc0005a8c80) Create stream\nI0113 18:27:47.954018    2167 log.go:172] (0xc00013c580) (0xc0005a8c80) Stream added, broadcasting: 3\nI0113 18:27:47.954782    2167 log.go:172] (0xc00013c580) Reply frame received for 3\nI0113 18:27:47.954804    2167 log.go:172] (0xc00013c580) (0xc000728640) Create stream\nI0113 18:27:47.954810    2167 log.go:172] (0xc00013c580) (0xc000728640) Stream added, broadcasting: 5\nI0113 18:27:47.955524    2167 log.go:172] (0xc00013c580) Reply frame received for 5\nI0113 18:27:48.017123    2167 log.go:172] (0xc00013c580) Data frame received for 5\nI0113 18:27:48.017194    2167 log.go:172] (0xc00013c580) Data frame received for 3\nI0113 18:27:48.017252    2167 log.go:172] (0xc0005a8c80) (3) Data frame handling\nI0113 18:27:48.017283    2167 log.go:172] (0xc0005a8c80) (3) Data frame sent\nI0113 18:27:48.017295    2167 log.go:172] (0xc00013c580) Data frame received for 3\nI0113 18:27:48.017307    2167 log.go:172] (0xc0005a8c80) (3) Data frame handling\nI0113 18:27:48.017321    2167 log.go:172] (0xc000728640) (5) Data frame handling\nI0113 18:27:48.019290    2167 log.go:172] (0xc00013c580) Data frame received for 1\nI0113 18:27:48.019327    2167 log.go:172] (0xc0007285a0) (1) Data frame handling\nI0113 18:27:48.019362    2167 log.go:172] (0xc0007285a0) (1) Data frame sent\nI0113 18:27:48.019395    2167 log.go:172] (0xc00013c580) (0xc0007285a0) Stream removed, broadcasting: 1\nI0113 18:27:48.019426    2167 log.go:172] (0xc00013c580) Go away received\nI0113 18:27:48.019660    2167 log.go:172] (0xc00013c580) (0xc0007285a0) Stream removed, broadcasting: 1\nI0113 18:27:48.019709    2167 log.go:172] (0xc00013c580) (0xc0005a8c80) Stream removed, broadcasting: 3\nI0113 18:27:48.019724    2167 log.go:172] (0xc00013c580) (0xc000728640) Stream removed, broadcasting: 5\n"
Jan 13 18:27:48.025: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 13 18:27:48.025: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 13 18:28:08.046: INFO: Waiting for StatefulSet e2e-tests-statefulset-jwf64/ss2 to complete update
Jan 13 18:28:08.046: INFO: Waiting for Pod e2e-tests-statefulset-jwf64/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Jan 13 18:28:18.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwf64 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 13 18:28:18.313: INFO: stderr: "I0113 18:28:18.188803    2189 log.go:172] (0xc0006e4370) (0xc000720640) Create stream\nI0113 18:28:18.188962    2189 log.go:172] (0xc0006e4370) (0xc000720640) Stream added, broadcasting: 1\nI0113 18:28:18.191291    2189 log.go:172] (0xc0006e4370) Reply frame received for 1\nI0113 18:28:18.191330    2189 log.go:172] (0xc0006e4370) (0xc0007debe0) Create stream\nI0113 18:28:18.191345    2189 log.go:172] (0xc0006e4370) (0xc0007debe0) Stream added, broadcasting: 3\nI0113 18:28:18.192308    2189 log.go:172] (0xc0006e4370) Reply frame received for 3\nI0113 18:28:18.192346    2189 log.go:172] (0xc0006e4370) (0xc0006c8000) Create stream\nI0113 18:28:18.192357    2189 log.go:172] (0xc0006e4370) (0xc0006c8000) Stream added, broadcasting: 5\nI0113 18:28:18.193492    2189 log.go:172] (0xc0006e4370) Reply frame received for 5\nI0113 18:28:18.305034    2189 log.go:172] (0xc0006e4370) Data frame received for 3\nI0113 18:28:18.305067    2189 log.go:172] (0xc0007debe0) (3) Data frame handling\nI0113 18:28:18.305144    2189 log.go:172] (0xc0007debe0) (3) Data frame sent\nI0113 18:28:18.305536    2189 log.go:172] (0xc0006e4370) Data frame received for 3\nI0113 18:28:18.305587    2189 log.go:172] (0xc0007debe0) (3) Data frame handling\nI0113 18:28:18.305611    2189 log.go:172] (0xc0006e4370) Data frame received for 5\nI0113 18:28:18.305637    2189 log.go:172] (0xc0006c8000) (5) Data frame handling\nI0113 18:28:18.308366    2189 log.go:172] (0xc0006e4370) Data frame received for 1\nI0113 18:28:18.308388    2189 log.go:172] (0xc000720640) (1) Data frame handling\nI0113 18:28:18.308400    2189 log.go:172] (0xc000720640) (1) Data frame sent\nI0113 18:28:18.308416    2189 log.go:172] (0xc0006e4370) (0xc000720640) Stream removed, broadcasting: 1\nI0113 18:28:18.308645    2189 log.go:172] (0xc0006e4370) (0xc000720640) Stream removed, broadcasting: 1\nI0113 18:28:18.308660    2189 log.go:172] (0xc0006e4370) (0xc0007debe0) Stream removed, broadcasting: 3\nI0113 18:28:18.308670    2189 log.go:172] (0xc0006e4370) (0xc0006c8000) Stream removed, broadcasting: 5\n"
Jan 13 18:28:18.313: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 13 18:28:18.313: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 13 18:28:28.362: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 13 18:28:38.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwf64 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:28:38.603: INFO: stderr: "I0113 18:28:38.535251    2212 log.go:172] (0xc00083a2c0) (0xc00067b540) Create stream\nI0113 18:28:38.535307    2212 log.go:172] (0xc00083a2c0) (0xc00067b540) Stream added, broadcasting: 1\nI0113 18:28:38.539843    2212 log.go:172] (0xc00083a2c0) Reply frame received for 1\nI0113 18:28:38.540250    2212 log.go:172] (0xc00083a2c0) (0xc0005e0000) Create stream\nI0113 18:28:38.540331    2212 log.go:172] (0xc00083a2c0) (0xc0005e0000) Stream added, broadcasting: 3\nI0113 18:28:38.543042    2212 log.go:172] (0xc00083a2c0) Reply frame received for 3\nI0113 18:28:38.543113    2212 log.go:172] (0xc00083a2c0) (0xc00067b5e0) Create stream\nI0113 18:28:38.543138    2212 log.go:172] (0xc00083a2c0) (0xc00067b5e0) Stream added, broadcasting: 5\nI0113 18:28:38.544146    2212 log.go:172] (0xc00083a2c0) Reply frame received for 5\nI0113 18:28:38.598184    2212 log.go:172] (0xc00083a2c0) Data frame received for 5\nI0113 18:28:38.598240    2212 log.go:172] (0xc00067b5e0) (5) Data frame handling\nI0113 18:28:38.598267    2212 log.go:172] (0xc00083a2c0) Data frame received for 3\nI0113 18:28:38.598283    2212 log.go:172] (0xc0005e0000) (3) Data frame handling\nI0113 18:28:38.598303    2212 log.go:172] (0xc0005e0000) (3) Data frame sent\nI0113 18:28:38.598315    2212 log.go:172] (0xc00083a2c0) Data frame received for 3\nI0113 18:28:38.598323    2212 log.go:172] (0xc0005e0000) (3) Data frame handling\nI0113 18:28:38.599659    2212 log.go:172] (0xc00083a2c0) Data frame received for 1\nI0113 18:28:38.599698    2212 log.go:172] (0xc00067b540) (1) Data frame handling\nI0113 18:28:38.599720    2212 log.go:172] (0xc00067b540) (1) Data frame sent\nI0113 18:28:38.599758    2212 log.go:172] (0xc00083a2c0) (0xc00067b540) Stream removed, broadcasting: 1\nI0113 18:28:38.599803    2212 log.go:172] (0xc00083a2c0) Go away received\nI0113 18:28:38.600023    2212 log.go:172] (0xc00083a2c0) (0xc00067b540) Stream removed, broadcasting: 1\nI0113 18:28:38.600060    2212 log.go:172] (0xc00083a2c0) (0xc0005e0000) Stream removed, broadcasting: 3\nI0113 18:28:38.600075    2212 log.go:172] (0xc00083a2c0) (0xc00067b5e0) Stream removed, broadcasting: 5\n"
Jan 13 18:28:38.604: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 13 18:28:38.604: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 13 18:28:58.634: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jwf64
Jan 13 18:28:58.636: INFO: Scaling statefulset ss2 to 0
Jan 13 18:29:18.659: INFO: Waiting for statefulset status.replicas updated to 0
Jan 13 18:29:18.662: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:29:18.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-jwf64" for this suite.
Jan 13 18:29:26.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:29:26.786: INFO: namespace: e2e-tests-statefulset-jwf64, resource: bindings, ignored listing per whitelist
Jan 13 18:29:26.786: INFO: namespace e2e-tests-statefulset-jwf64 deletion completed in 8.10249114s

• [SLOW TEST:139.513 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
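The StatefulSet test above logs "Updating Pods in reverse ordinal order": a rolling update touches the highest ordinal first (ss2-2, then ss2-1, then ss2-0), and the rollback proceeds the same way. A tiny Python sketch of that ordering rule; the helper is illustrative only and not part of the controller or test framework:

```python
def rolling_update_order(replicas, name="ss2"):
    """Return pod names in the order a StatefulSet rolling update visits
    them: highest ordinal first (reverse ordinal order), matching the
    behavior observed in the log. `name` mirrors the test's StatefulSet.
    """
    return [f"{name}-{i}" for i in range(replicas - 1, -1, -1)]

# For the 3-replica set in the test, ss2-2 is updated first and ss2-0 last.
order = rolling_update_order(3)
# order == ["ss2-2", "ss2-1", "ss2-0"]
```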
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:29:26.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 18:29:26.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:29:30.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-4rbkn" for this suite.
Jan 13 18:30:21.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:30:21.096: INFO: namespace: e2e-tests-pods-4rbkn, resource: bindings, ignored listing per whitelist
Jan 13 18:30:21.130: INFO: namespace e2e-tests-pods-4rbkn deletion completed in 50.147830018s

• [SLOW TEST:54.345 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:30:21.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-mjm68
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-mjm68
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-mjm68
Jan 13 18:30:21.283: INFO: Found 0 stateful pods, waiting for 1
Jan 13 18:30:31.288: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 13 18:30:31.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mjm68 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 13 18:30:31.564: INFO: stderr: "I0113 18:30:31.427439    2235 log.go:172] (0xc000154840) (0xc000748640) Create stream\nI0113 18:30:31.427507    2235 log.go:172] (0xc000154840) (0xc000748640) Stream added, broadcasting: 1\nI0113 18:30:31.429674    2235 log.go:172] (0xc000154840) Reply frame received for 1\nI0113 18:30:31.429719    2235 log.go:172] (0xc000154840) (0xc0006a2be0) Create stream\nI0113 18:30:31.429735    2235 log.go:172] (0xc000154840) (0xc0006a2be0) Stream added, broadcasting: 3\nI0113 18:30:31.430754    2235 log.go:172] (0xc000154840) Reply frame received for 3\nI0113 18:30:31.430815    2235 log.go:172] (0xc000154840) (0xc000348000) Create stream\nI0113 18:30:31.430832    2235 log.go:172] (0xc000154840) (0xc000348000) Stream added, broadcasting: 5\nI0113 18:30:31.431712    2235 log.go:172] (0xc000154840) Reply frame received for 5\nI0113 18:30:31.558013    2235 log.go:172] (0xc000154840) Data frame received for 3\nI0113 18:30:31.558048    2235 log.go:172] (0xc0006a2be0) (3) Data frame handling\nI0113 18:30:31.558067    2235 log.go:172] (0xc0006a2be0) (3) Data frame sent\nI0113 18:30:31.558344    2235 log.go:172] (0xc000154840) Data frame received for 5\nI0113 18:30:31.558376    2235 log.go:172] (0xc000348000) (5) Data frame handling\nI0113 18:30:31.558662    2235 log.go:172] (0xc000154840) Data frame received for 3\nI0113 18:30:31.558686    2235 log.go:172] (0xc0006a2be0) (3) Data frame handling\nI0113 18:30:31.560352    2235 log.go:172] (0xc000154840) Data frame received for 1\nI0113 18:30:31.560391    2235 log.go:172] (0xc000748640) (1) Data frame handling\nI0113 18:30:31.560414    2235 log.go:172] (0xc000748640) (1) Data frame sent\nI0113 18:30:31.560449    2235 log.go:172] (0xc000154840) (0xc000748640) Stream removed, broadcasting: 1\nI0113 18:30:31.560471    2235 log.go:172] (0xc000154840) Go away received\nI0113 18:30:31.560636    2235 log.go:172] (0xc000154840) (0xc000748640) Stream removed, broadcasting: 1\nI0113 18:30:31.560656    2235 log.go:172] (0xc000154840) (0xc0006a2be0) Stream removed, broadcasting: 3\nI0113 18:30:31.560666    2235 log.go:172] (0xc000154840) (0xc000348000) Stream removed, broadcasting: 5\n"
Jan 13 18:30:31.565: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 13 18:30:31.565: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 13 18:30:31.568: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 13 18:30:41.572: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 13 18:30:41.572: INFO: Waiting for statefulset status.replicas updated to 0
Jan 13 18:30:41.590: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999252s
Jan 13 18:30:42.594: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995620907s
Jan 13 18:30:43.599: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990878707s
Jan 13 18:30:44.604: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986036345s
Jan 13 18:30:45.609: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.98131605s
Jan 13 18:30:46.613: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.976688599s
Jan 13 18:30:47.618: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.971869369s
Jan 13 18:30:48.623: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.967222908s
Jan 13 18:30:49.628: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.962086049s
Jan 13 18:30:50.632: INFO: Verifying statefulset ss doesn't scale past 1 for another 957.098045ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-mjm68
Jan 13 18:30:51.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mjm68 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:30:51.852: INFO: stderr: "I0113 18:30:51.767726    2257 log.go:172] (0xc000748370) (0xc00077e640) Create stream\nI0113 18:30:51.767781    2257 log.go:172] (0xc000748370) (0xc00077e640) Stream added, broadcasting: 1\nI0113 18:30:51.769923    2257 log.go:172] (0xc000748370) Reply frame received for 1\nI0113 18:30:51.769987    2257 log.go:172] (0xc000748370) (0xc000698f00) Create stream\nI0113 18:30:51.770020    2257 log.go:172] (0xc000748370) (0xc000698f00) Stream added, broadcasting: 3\nI0113 18:30:51.770898    2257 log.go:172] (0xc000748370) Reply frame received for 3\nI0113 18:30:51.770934    2257 log.go:172] (0xc000748370) (0xc0005e2000) Create stream\nI0113 18:30:51.770945    2257 log.go:172] (0xc000748370) (0xc0005e2000) Stream added, broadcasting: 5\nI0113 18:30:51.771780    2257 log.go:172] (0xc000748370) Reply frame received for 5\nI0113 18:30:51.844649    2257 log.go:172] (0xc000748370) Data frame received for 3\nI0113 18:30:51.844789    2257 log.go:172] (0xc000698f00) (3) Data frame handling\nI0113 18:30:51.844815    2257 log.go:172] (0xc000698f00) (3) Data frame sent\nI0113 18:30:51.844956    2257 log.go:172] (0xc000748370) Data frame received for 3\nI0113 18:30:51.844978    2257 log.go:172] (0xc000698f00) (3) Data frame handling\nI0113 18:30:51.845209    2257 log.go:172] (0xc000748370) Data frame received for 5\nI0113 18:30:51.845286    2257 log.go:172] (0xc0005e2000) (5) Data frame handling\nI0113 18:30:51.848023    2257 log.go:172] (0xc000748370) Data frame received for 1\nI0113 18:30:51.848036    2257 log.go:172] (0xc00077e640) (1) Data frame handling\nI0113 18:30:51.848042    2257 log.go:172] (0xc00077e640) (1) Data frame sent\nI0113 18:30:51.848052    2257 log.go:172] (0xc000748370) (0xc00077e640) Stream removed, broadcasting: 1\nI0113 18:30:51.848150    2257 log.go:172] (0xc000748370) Go away received\nI0113 18:30:51.848362    2257 log.go:172] (0xc000748370) (0xc00077e640) Stream removed, broadcasting: 1\nI0113 18:30:51.848422    2257 log.go:172] (0xc000748370) (0xc000698f00) Stream removed, broadcasting: 3\nI0113 18:30:51.848451    2257 log.go:172] (0xc000748370) (0xc0005e2000) Stream removed, broadcasting: 5\n"
Jan 13 18:30:51.853: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 13 18:30:51.853: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 13 18:30:51.857: INFO: Found 1 stateful pod, waiting for 3
Jan 13 18:31:01.861: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 18:31:01.861: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 18:31:01.861: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 13 18:31:01.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mjm68 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 13 18:31:02.068: INFO: stderr: "I0113 18:31:01.994970    2280 log.go:172] (0xc000138790) (0xc0005532c0) Create stream\nI0113 18:31:01.995022    2280 log.go:172] (0xc000138790) (0xc0005532c0) Stream added, broadcasting: 1\nI0113 18:31:01.997244    2280 log.go:172] (0xc000138790) Reply frame received for 1\nI0113 18:31:01.997285    2280 log.go:172] (0xc000138790) (0xc000553360) Create stream\nI0113 18:31:01.997298    2280 log.go:172] (0xc000138790) (0xc000553360) Stream added, broadcasting: 3\nI0113 18:31:01.998041    2280 log.go:172] (0xc000138790) Reply frame received for 3\nI0113 18:31:01.998071    2280 log.go:172] (0xc000138790) (0xc000553400) Create stream\nI0113 18:31:01.998079    2280 log.go:172] (0xc000138790) (0xc000553400) Stream added, broadcasting: 5\nI0113 18:31:01.998855    2280 log.go:172] (0xc000138790) Reply frame received for 5\nI0113 18:31:02.061159    2280 log.go:172] (0xc000138790) Data frame received for 5\nI0113 18:31:02.061196    2280 log.go:172] (0xc000553400) (5) Data frame handling\nI0113 18:31:02.061223    2280 log.go:172] (0xc000138790) Data frame received for 3\nI0113 18:31:02.061234    2280 log.go:172] (0xc000553360) (3) Data frame handling\nI0113 18:31:02.061244    2280 log.go:172] (0xc000553360) (3) Data frame sent\nI0113 18:31:02.061250    2280 log.go:172] (0xc000138790) Data frame received for 3\nI0113 18:31:02.061255    2280 log.go:172] (0xc000553360) (3) Data frame handling\nI0113 18:31:02.062466    2280 log.go:172] (0xc000138790) Data frame received for 1\nI0113 18:31:02.062487    2280 log.go:172] (0xc0005532c0) (1) Data frame handling\nI0113 18:31:02.062496    2280 log.go:172] (0xc0005532c0) (1) Data frame sent\nI0113 18:31:02.062506    2280 log.go:172] (0xc000138790) (0xc0005532c0) Stream removed, broadcasting: 1\nI0113 18:31:02.062517    2280 log.go:172] (0xc000138790) Go away received\nI0113 18:31:02.062805    2280 log.go:172] (0xc000138790) (0xc0005532c0) Stream removed, broadcasting: 1\nI0113 18:31:02.062836    2280 
log.go:172] (0xc000138790) (0xc000553360) Stream removed, broadcasting: 3\nI0113 18:31:02.062851    2280 log.go:172] (0xc000138790) (0xc000553400) Stream removed, broadcasting: 5\n"
Jan 13 18:31:02.068: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 13 18:31:02.069: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 13 18:31:02.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mjm68 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 13 18:31:02.290: INFO: stderr: "I0113 18:31:02.186753    2301 log.go:172] (0xc000778370) (0xc0003292c0) Create stream\nI0113 18:31:02.186804    2301 log.go:172] (0xc000778370) (0xc0003292c0) Stream added, broadcasting: 1\nI0113 18:31:02.189811    2301 log.go:172] (0xc000778370) Reply frame received for 1\nI0113 18:31:02.189866    2301 log.go:172] (0xc000778370) (0xc000120000) Create stream\nI0113 18:31:02.189903    2301 log.go:172] (0xc000778370) (0xc000120000) Stream added, broadcasting: 3\nI0113 18:31:02.191144    2301 log.go:172] (0xc000778370) Reply frame received for 3\nI0113 18:31:02.191190    2301 log.go:172] (0xc000778370) (0xc000329360) Create stream\nI0113 18:31:02.191203    2301 log.go:172] (0xc000778370) (0xc000329360) Stream added, broadcasting: 5\nI0113 18:31:02.193540    2301 log.go:172] (0xc000778370) Reply frame received for 5\nI0113 18:31:02.283282    2301 log.go:172] (0xc000778370) Data frame received for 5\nI0113 18:31:02.283332    2301 log.go:172] (0xc000778370) Data frame received for 3\nI0113 18:31:02.283369    2301 log.go:172] (0xc000120000) (3) Data frame handling\nI0113 18:31:02.283387    2301 log.go:172] (0xc000120000) (3) Data frame sent\nI0113 18:31:02.283400    2301 log.go:172] (0xc000778370) Data frame received for 3\nI0113 18:31:02.283410    2301 log.go:172] (0xc000120000) (3) Data frame handling\nI0113 18:31:02.283456    2301 log.go:172] (0xc000329360) (5) Data frame handling\nI0113 18:31:02.285348    2301 log.go:172] (0xc000778370) Data frame received for 1\nI0113 18:31:02.285397    2301 log.go:172] (0xc0003292c0) (1) Data frame handling\nI0113 18:31:02.285420    2301 log.go:172] (0xc0003292c0) (1) Data frame sent\nI0113 18:31:02.285454    2301 log.go:172] (0xc000778370) (0xc0003292c0) Stream removed, broadcasting: 1\nI0113 18:31:02.285496    2301 log.go:172] (0xc000778370) Go away received\nI0113 18:31:02.285719    2301 log.go:172] (0xc000778370) (0xc0003292c0) Stream removed, broadcasting: 1\nI0113 18:31:02.285755    2301 
log.go:172] (0xc000778370) (0xc000120000) Stream removed, broadcasting: 3\nI0113 18:31:02.285787    2301 log.go:172] (0xc000778370) (0xc000329360) Stream removed, broadcasting: 5\n"
Jan 13 18:31:02.290: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 13 18:31:02.290: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 13 18:31:02.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mjm68 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 13 18:31:02.535: INFO: stderr: "I0113 18:31:02.416525    2324 log.go:172] (0xc000162840) (0xc00074e640) Create stream\nI0113 18:31:02.416599    2324 log.go:172] (0xc000162840) (0xc00074e640) Stream added, broadcasting: 1\nI0113 18:31:02.419147    2324 log.go:172] (0xc000162840) Reply frame received for 1\nI0113 18:31:02.419177    2324 log.go:172] (0xc000162840) (0xc00074e6e0) Create stream\nI0113 18:31:02.419186    2324 log.go:172] (0xc000162840) (0xc00074e6e0) Stream added, broadcasting: 3\nI0113 18:31:02.420014    2324 log.go:172] (0xc000162840) Reply frame received for 3\nI0113 18:31:02.420075    2324 log.go:172] (0xc000162840) (0xc0005fcc80) Create stream\nI0113 18:31:02.420110    2324 log.go:172] (0xc000162840) (0xc0005fcc80) Stream added, broadcasting: 5\nI0113 18:31:02.420987    2324 log.go:172] (0xc000162840) Reply frame received for 5\nI0113 18:31:02.528606    2324 log.go:172] (0xc000162840) Data frame received for 3\nI0113 18:31:02.528638    2324 log.go:172] (0xc00074e6e0) (3) Data frame handling\nI0113 18:31:02.528653    2324 log.go:172] (0xc00074e6e0) (3) Data frame sent\nI0113 18:31:02.528661    2324 log.go:172] (0xc000162840) Data frame received for 3\nI0113 18:31:02.528671    2324 log.go:172] (0xc00074e6e0) (3) Data frame handling\nI0113 18:31:02.528765    2324 log.go:172] (0xc000162840) Data frame received for 5\nI0113 18:31:02.528794    2324 log.go:172] (0xc0005fcc80) (5) Data frame handling\nI0113 18:31:02.531043    2324 log.go:172] (0xc000162840) Data frame received for 1\nI0113 18:31:02.531065    2324 log.go:172] (0xc00074e640) (1) Data frame handling\nI0113 18:31:02.531079    2324 log.go:172] (0xc00074e640) (1) Data frame sent\nI0113 18:31:02.531149    2324 log.go:172] (0xc000162840) (0xc00074e640) Stream removed, broadcasting: 1\nI0113 18:31:02.531200    2324 log.go:172] (0xc000162840) Go away received\nI0113 18:31:02.531383    2324 log.go:172] (0xc000162840) (0xc00074e640) Stream removed, broadcasting: 1\nI0113 18:31:02.531397    2324 
log.go:172] (0xc000162840) (0xc00074e6e0) Stream removed, broadcasting: 3\nI0113 18:31:02.531408    2324 log.go:172] (0xc000162840) (0xc0005fcc80) Stream removed, broadcasting: 5\n"
Jan 13 18:31:02.535: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 13 18:31:02.535: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 13 18:31:02.535: INFO: Waiting for statefulset status.replicas to be updated to 0
Jan 13 18:31:02.539: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 13 18:31:12.547: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 13 18:31:12.547: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 13 18:31:12.547: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 13 18:31:12.561: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999469s
Jan 13 18:31:13.569: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993803194s
Jan 13 18:31:14.574: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985658976s
Jan 13 18:31:15.580: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980145982s
Jan 13 18:31:16.585: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.974848063s
Jan 13 18:31:17.590: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.969669114s
Jan 13 18:31:18.595: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.96473595s
Jan 13 18:31:19.601: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.959099079s
Jan 13 18:31:20.607: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.953160586s
Jan 13 18:31:21.613: INFO: Verifying statefulset ss doesn't scale past 3 for another 947.572138ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-mjm68
Jan 13 18:31:22.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mjm68 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:31:22.842: INFO: stderr: "I0113 18:31:22.750086    2347 log.go:172] (0xc0007184d0) (0xc000742640) Create stream\nI0113 18:31:22.750166    2347 log.go:172] (0xc0007184d0) (0xc000742640) Stream added, broadcasting: 1\nI0113 18:31:22.752317    2347 log.go:172] (0xc0007184d0) Reply frame received for 1\nI0113 18:31:22.752377    2347 log.go:172] (0xc0007184d0) (0xc0007e4d20) Create stream\nI0113 18:31:22.752402    2347 log.go:172] (0xc0007184d0) (0xc0007e4d20) Stream added, broadcasting: 3\nI0113 18:31:22.753458    2347 log.go:172] (0xc0007184d0) Reply frame received for 3\nI0113 18:31:22.753508    2347 log.go:172] (0xc0007184d0) (0xc0007426e0) Create stream\nI0113 18:31:22.753523    2347 log.go:172] (0xc0007184d0) (0xc0007426e0) Stream added, broadcasting: 5\nI0113 18:31:22.754206    2347 log.go:172] (0xc0007184d0) Reply frame received for 5\nI0113 18:31:22.837325    2347 log.go:172] (0xc0007184d0) Data frame received for 5\nI0113 18:31:22.837355    2347 log.go:172] (0xc0007426e0) (5) Data frame handling\nI0113 18:31:22.837417    2347 log.go:172] (0xc0007184d0) Data frame received for 3\nI0113 18:31:22.837441    2347 log.go:172] (0xc0007e4d20) (3) Data frame handling\nI0113 18:31:22.837458    2347 log.go:172] (0xc0007e4d20) (3) Data frame sent\nI0113 18:31:22.837474    2347 log.go:172] (0xc0007184d0) Data frame received for 3\nI0113 18:31:22.837480    2347 log.go:172] (0xc0007e4d20) (3) Data frame handling\nI0113 18:31:22.838821    2347 log.go:172] (0xc0007184d0) Data frame received for 1\nI0113 18:31:22.838849    2347 log.go:172] (0xc000742640) (1) Data frame handling\nI0113 18:31:22.838868    2347 log.go:172] (0xc000742640) (1) Data frame sent\nI0113 18:31:22.838877    2347 log.go:172] (0xc0007184d0) (0xc000742640) Stream removed, broadcasting: 1\nI0113 18:31:22.838894    2347 log.go:172] (0xc0007184d0) Go away received\nI0113 18:31:22.839086    2347 log.go:172] (0xc0007184d0) (0xc000742640) Stream removed, broadcasting: 1\nI0113 18:31:22.839108    2347 
log.go:172] (0xc0007184d0) (0xc0007e4d20) Stream removed, broadcasting: 3\nI0113 18:31:22.839117    2347 log.go:172] (0xc0007184d0) (0xc0007426e0) Stream removed, broadcasting: 5\n"
Jan 13 18:31:22.842: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 13 18:31:22.842: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 13 18:31:22.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mjm68 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:31:23.044: INFO: stderr: "I0113 18:31:22.968654    2370 log.go:172] (0xc0003904d0) (0xc0006d1360) Create stream\nI0113 18:31:22.968711    2370 log.go:172] (0xc0003904d0) (0xc0006d1360) Stream added, broadcasting: 1\nI0113 18:31:22.971033    2370 log.go:172] (0xc0003904d0) Reply frame received for 1\nI0113 18:31:22.971082    2370 log.go:172] (0xc0003904d0) (0xc000120000) Create stream\nI0113 18:31:22.971093    2370 log.go:172] (0xc0003904d0) (0xc000120000) Stream added, broadcasting: 3\nI0113 18:31:22.972204    2370 log.go:172] (0xc0003904d0) Reply frame received for 3\nI0113 18:31:22.972244    2370 log.go:172] (0xc0003904d0) (0xc0001200a0) Create stream\nI0113 18:31:22.972255    2370 log.go:172] (0xc0003904d0) (0xc0001200a0) Stream added, broadcasting: 5\nI0113 18:31:22.973184    2370 log.go:172] (0xc0003904d0) Reply frame received for 5\nI0113 18:31:23.037545    2370 log.go:172] (0xc0003904d0) Data frame received for 3\nI0113 18:31:23.037579    2370 log.go:172] (0xc000120000) (3) Data frame handling\nI0113 18:31:23.037602    2370 log.go:172] (0xc000120000) (3) Data frame sent\nI0113 18:31:23.037613    2370 log.go:172] (0xc0003904d0) Data frame received for 3\nI0113 18:31:23.037627    2370 log.go:172] (0xc000120000) (3) Data frame handling\nI0113 18:31:23.037687    2370 log.go:172] (0xc0003904d0) Data frame received for 5\nI0113 18:31:23.037703    2370 log.go:172] (0xc0001200a0) (5) Data frame handling\nI0113 18:31:23.039408    2370 log.go:172] (0xc0003904d0) Data frame received for 1\nI0113 18:31:23.039455    2370 log.go:172] (0xc0006d1360) (1) Data frame handling\nI0113 18:31:23.039492    2370 log.go:172] (0xc0006d1360) (1) Data frame sent\nI0113 18:31:23.039525    2370 log.go:172] (0xc0003904d0) (0xc0006d1360) Stream removed, broadcasting: 1\nI0113 18:31:23.039557    2370 log.go:172] (0xc0003904d0) Go away received\nI0113 18:31:23.039820    2370 log.go:172] (0xc0003904d0) (0xc0006d1360) Stream removed, broadcasting: 1\nI0113 18:31:23.039836    2370 
log.go:172] (0xc0003904d0) (0xc000120000) Stream removed, broadcasting: 3\nI0113 18:31:23.039842    2370 log.go:172] (0xc0003904d0) (0xc0001200a0) Stream removed, broadcasting: 5\n"
Jan 13 18:31:23.044: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 13 18:31:23.044: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 13 18:31:23.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mjm68 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:31:23.275: INFO: stderr: "I0113 18:31:23.193506    2393 log.go:172] (0xc00015c840) (0xc000736640) Create stream\nI0113 18:31:23.193588    2393 log.go:172] (0xc00015c840) (0xc000736640) Stream added, broadcasting: 1\nI0113 18:31:23.195808    2393 log.go:172] (0xc00015c840) Reply frame received for 1\nI0113 18:31:23.195854    2393 log.go:172] (0xc00015c840) (0xc0007366e0) Create stream\nI0113 18:31:23.195868    2393 log.go:172] (0xc00015c840) (0xc0007366e0) Stream added, broadcasting: 3\nI0113 18:31:23.196825    2393 log.go:172] (0xc00015c840) Reply frame received for 3\nI0113 18:31:23.196972    2393 log.go:172] (0xc00015c840) (0xc00066cf00) Create stream\nI0113 18:31:23.196989    2393 log.go:172] (0xc00015c840) (0xc00066cf00) Stream added, broadcasting: 5\nI0113 18:31:23.198040    2393 log.go:172] (0xc00015c840) Reply frame received for 5\nI0113 18:31:23.268741    2393 log.go:172] (0xc00015c840) Data frame received for 5\nI0113 18:31:23.268781    2393 log.go:172] (0xc00066cf00) (5) Data frame handling\nI0113 18:31:23.268811    2393 log.go:172] (0xc00015c840) Data frame received for 3\nI0113 18:31:23.268822    2393 log.go:172] (0xc0007366e0) (3) Data frame handling\nI0113 18:31:23.268829    2393 log.go:172] (0xc0007366e0) (3) Data frame sent\nI0113 18:31:23.268926    2393 log.go:172] (0xc00015c840) Data frame received for 3\nI0113 18:31:23.268939    2393 log.go:172] (0xc0007366e0) (3) Data frame handling\nI0113 18:31:23.270576    2393 log.go:172] (0xc00015c840) Data frame received for 1\nI0113 18:31:23.270591    2393 log.go:172] (0xc000736640) (1) Data frame handling\nI0113 18:31:23.270600    2393 log.go:172] (0xc000736640) (1) Data frame sent\nI0113 18:31:23.270619    2393 log.go:172] (0xc00015c840) (0xc000736640) Stream removed, broadcasting: 1\nI0113 18:31:23.270645    2393 log.go:172] (0xc00015c840) Go away received\nI0113 18:31:23.270828    2393 log.go:172] (0xc00015c840) (0xc000736640) Stream removed, broadcasting: 1\nI0113 18:31:23.270848    2393 
log.go:172] (0xc00015c840) (0xc0007366e0) Stream removed, broadcasting: 3\nI0113 18:31:23.270859    2393 log.go:172] (0xc00015c840) (0xc00066cf00) Stream removed, broadcasting: 5\n"
Jan 13 18:31:23.275: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 13 18:31:23.275: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 13 18:31:23.275: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 13 18:31:53.293: INFO: Deleting all statefulset in ns e2e-tests-statefulset-mjm68
Jan 13 18:31:53.296: INFO: Scaling statefulset ss to 0
Jan 13 18:31:53.304: INFO: Waiting for statefulset status.replicas to be updated to 0
Jan 13 18:31:53.307: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:31:53.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-mjm68" for this suite.
Jan 13 18:31:59.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:31:59.428: INFO: namespace: e2e-tests-statefulset-mjm68, resource: bindings, ignored listing per whitelist
Jan 13 18:31:59.446: INFO: namespace e2e-tests-statefulset-mjm68 deletion completed in 6.110313765s

• [SLOW TEST:98.315 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:31:59.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-pvbxw.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-pvbxw.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-pvbxw.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-pvbxw.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-pvbxw.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-pvbxw.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 13 18:32:05.667: INFO: DNS probes using e2e-tests-dns-pvbxw/dns-test-9f5fd540-55cd-11eb-8355-0242ac110009 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:32:05.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-pvbxw" for this suite.
Jan 13 18:32:11.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:32:11.776: INFO: namespace: e2e-tests-dns-pvbxw, resource: bindings, ignored listing per whitelist
Jan 13 18:32:11.832: INFO: namespace e2e-tests-dns-pvbxw deletion completed in 6.09537686s

• [SLOW TEST:12.386 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:32:11.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 13 18:32:20.121: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 13 18:32:20.130: INFO: Pod pod-with-prestop-http-hook still exists
Jan 13 18:32:22.131: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 13 18:32:22.135: INFO: Pod pod-with-prestop-http-hook still exists
Jan 13 18:32:24.131: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 13 18:32:24.135: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:32:24.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-r9z25" for this suite.
Jan 13 18:32:46.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:32:46.201: INFO: namespace: e2e-tests-container-lifecycle-hook-r9z25, resource: bindings, ignored listing per whitelist
Jan 13 18:32:46.284: INFO: namespace e2e-tests-container-lifecycle-hook-r9z25 deletion completed in 22.139121733s

• [SLOW TEST:34.452 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:32:46.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-bb493c7c-55cd-11eb-8355-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 13 18:32:46.427: INFO: Waiting up to 5m0s for pod "pod-secrets-bb4bd324-55cd-11eb-8355-0242ac110009" in namespace "e2e-tests-secrets-h7wmp" to be "success or failure"
Jan 13 18:32:46.431: INFO: Pod "pod-secrets-bb4bd324-55cd-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.539938ms
Jan 13 18:32:48.435: INFO: Pod "pod-secrets-bb4bd324-55cd-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008392221s
Jan 13 18:32:50.439: INFO: Pod "pod-secrets-bb4bd324-55cd-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012603s
STEP: Saw pod success
Jan 13 18:32:50.439: INFO: Pod "pod-secrets-bb4bd324-55cd-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:32:50.442: INFO: Trying to get logs from node hunter-control-plane pod pod-secrets-bb4bd324-55cd-11eb-8355-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan 13 18:32:50.669: INFO: Waiting for pod pod-secrets-bb4bd324-55cd-11eb-8355-0242ac110009 to disappear
Jan 13 18:32:50.682: INFO: Pod pod-secrets-bb4bd324-55cd-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:32:50.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-h7wmp" for this suite.
Jan 13 18:32:56.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:32:56.756: INFO: namespace: e2e-tests-secrets-h7wmp, resource: bindings, ignored listing per whitelist
Jan 13 18:32:56.813: INFO: namespace e2e-tests-secrets-h7wmp deletion completed in 6.127585317s

• [SLOW TEST:10.529 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:32:56.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan 13 18:33:01.035: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-c193cab7-55cd-11eb-8355-0242ac110009", GenerateName:"", Namespace:"e2e-tests-pods-rch98", SelfLink:"/api/v1/namespaces/e2e-tests-pods-rch98/pods/pod-submit-remove-c193cab7-55cd-11eb-8355-0242ac110009", UID:"c19f2d92-55cd-11eb-9c75-0242ac12000b", ResourceVersion:"497482", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63746159577, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"927722145"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8gfc8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00196d040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8gfc8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001d591d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-control-plane", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000e5a720), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001d59220)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001d59370)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001d59378), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001d5937c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746159577, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746159580, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746159580, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746159577, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.11", PodIP:"10.244.0.234", StartTime:(*v1.Time)(0xc002afe960), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002afe980), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://a1ebca63ba140e74ec9057a092747e6b959d04f48c586d7872cbf2773fbb78f0"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 13 18:33:06.051: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:33:06.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rch98" for this suite.
Jan 13 18:33:12.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:33:12.116: INFO: namespace: e2e-tests-pods-rch98, resource: bindings, ignored listing per whitelist
Jan 13 18:33:12.159: INFO: namespace e2e-tests-pods-rch98 deletion completed in 6.100988696s

• [SLOW TEST:15.346 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:33:12.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-cab5c0b0-55cd-11eb-8355-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 13 18:33:12.272: INFO: Waiting up to 5m0s for pod "pod-configmaps-cab7fa52-55cd-11eb-8355-0242ac110009" in namespace "e2e-tests-configmap-p5bgd" to be "success or failure"
Jan 13 18:33:12.302: INFO: Pod "pod-configmaps-cab7fa52-55cd-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 29.555771ms
Jan 13 18:33:14.338: INFO: Pod "pod-configmaps-cab7fa52-55cd-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065913223s
Jan 13 18:33:16.342: INFO: Pod "pod-configmaps-cab7fa52-55cd-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069610911s
STEP: Saw pod success
Jan 13 18:33:16.342: INFO: Pod "pod-configmaps-cab7fa52-55cd-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:33:16.344: INFO: Trying to get logs from node hunter-control-plane pod pod-configmaps-cab7fa52-55cd-11eb-8355-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jan 13 18:33:16.561: INFO: Waiting for pod pod-configmaps-cab7fa52-55cd-11eb-8355-0242ac110009 to disappear
Jan 13 18:33:16.587: INFO: Pod pod-configmaps-cab7fa52-55cd-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:33:16.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-p5bgd" for this suite.
Jan 13 18:33:22.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:33:22.798: INFO: namespace: e2e-tests-configmap-p5bgd, resource: bindings, ignored listing per whitelist
Jan 13 18:33:22.893: INFO: namespace e2e-tests-configmap-p5bgd deletion completed in 6.302107323s

• [SLOW TEST:10.734 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:33:22.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-d122e2da-55cd-11eb-8355-0242ac110009
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-d122e2da-55cd-11eb-8355-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:33:29.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-97scj" for this suite.
Jan 13 18:33:51.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:33:51.151: INFO: namespace: e2e-tests-projected-97scj, resource: bindings, ignored listing per whitelist
Jan 13 18:33:51.236: INFO: namespace e2e-tests-projected-97scj deletion completed in 22.132650494s

• [SLOW TEST:28.343 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:33:51.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-e208c41d-55cd-11eb-8355-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 13 18:33:51.399: INFO: Waiting up to 5m0s for pod "pod-configmaps-e20a44dd-55cd-11eb-8355-0242ac110009" in namespace "e2e-tests-configmap-kdl66" to be "success or failure"
Jan 13 18:33:51.402: INFO: Pod "pod-configmaps-e20a44dd-55cd-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.730276ms
Jan 13 18:33:53.618: INFO: Pod "pod-configmaps-e20a44dd-55cd-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218942687s
Jan 13 18:33:55.622: INFO: Pod "pod-configmaps-e20a44dd-55cd-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.22289211s
STEP: Saw pod success
Jan 13 18:33:55.622: INFO: Pod "pod-configmaps-e20a44dd-55cd-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:33:55.624: INFO: Trying to get logs from node hunter-control-plane pod pod-configmaps-e20a44dd-55cd-11eb-8355-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jan 13 18:33:55.741: INFO: Waiting for pod pod-configmaps-e20a44dd-55cd-11eb-8355-0242ac110009 to disappear
Jan 13 18:33:55.750: INFO: Pod pod-configmaps-e20a44dd-55cd-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:33:55.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kdl66" for this suite.
Jan 13 18:34:01.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:34:01.833: INFO: namespace: e2e-tests-configmap-kdl66, resource: bindings, ignored listing per whitelist
Jan 13 18:34:01.851: INFO: namespace e2e-tests-configmap-kdl66 deletion completed in 6.098810551s

• [SLOW TEST:10.615 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:34:01.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 13 18:34:02.022: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e85dfbfa-55cd-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-qpzhz" to be "success or failure"
Jan 13 18:34:02.026: INFO: Pod "downwardapi-volume-e85dfbfa-55cd-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.308365ms
Jan 13 18:34:04.030: INFO: Pod "downwardapi-volume-e85dfbfa-55cd-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007393784s
Jan 13 18:34:06.034: INFO: Pod "downwardapi-volume-e85dfbfa-55cd-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011727949s
STEP: Saw pod success
Jan 13 18:34:06.034: INFO: Pod "downwardapi-volume-e85dfbfa-55cd-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:34:06.037: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-e85dfbfa-55cd-11eb-8355-0242ac110009 container client-container: 
STEP: delete the pod
Jan 13 18:34:06.056: INFO: Waiting for pod downwardapi-volume-e85dfbfa-55cd-11eb-8355-0242ac110009 to disappear
Jan 13 18:34:06.060: INFO: Pod downwardapi-volume-e85dfbfa-55cd-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:34:06.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qpzhz" for this suite.
Jan 13 18:34:12.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:34:12.092: INFO: namespace: e2e-tests-projected-qpzhz, resource: bindings, ignored listing per whitelist
Jan 13 18:34:12.157: INFO: namespace e2e-tests-projected-qpzhz deletion completed in 6.093961556s

• [SLOW TEST:10.305 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:34:12.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 13 18:34:12.261: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee77ec4a-55cd-11eb-8355-0242ac110009" in namespace "e2e-tests-downward-api-g68gn" to be "success or failure"
Jan 13 18:34:12.264: INFO: Pod "downwardapi-volume-ee77ec4a-55cd-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.297396ms
Jan 13 18:34:14.269: INFO: Pod "downwardapi-volume-ee77ec4a-55cd-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007741789s
Jan 13 18:34:16.273: INFO: Pod "downwardapi-volume-ee77ec4a-55cd-11eb-8355-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.011727104s
Jan 13 18:34:18.277: INFO: Pod "downwardapi-volume-ee77ec4a-55cd-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015833759s
STEP: Saw pod success
Jan 13 18:34:18.277: INFO: Pod "downwardapi-volume-ee77ec4a-55cd-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:34:18.279: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-ee77ec4a-55cd-11eb-8355-0242ac110009 container client-container: 
STEP: delete the pod
Jan 13 18:34:18.296: INFO: Waiting for pod downwardapi-volume-ee77ec4a-55cd-11eb-8355-0242ac110009 to disappear
Jan 13 18:34:18.300: INFO: Pod downwardapi-volume-ee77ec4a-55cd-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:34:18.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-g68gn" for this suite.
Jan 13 18:34:24.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:34:24.437: INFO: namespace: e2e-tests-downward-api-g68gn, resource: bindings, ignored listing per whitelist
Jan 13 18:34:24.441: INFO: namespace e2e-tests-downward-api-g68gn deletion completed in 6.137855043s

• [SLOW TEST:12.284 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:34:24.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 13 18:34:24.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:27.357: INFO: stderr: ""
Jan 13 18:34:27.357: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 13 18:34:27.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:27.463: INFO: stderr: ""
Jan 13 18:34:27.463: INFO: stdout: "update-demo-nautilus-67cnw update-demo-nautilus-mvj67 "
Jan 13 18:34:27.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-67cnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:27.575: INFO: stderr: ""
Jan 13 18:34:27.575: INFO: stdout: ""
Jan 13 18:34:27.575: INFO: update-demo-nautilus-67cnw is created but not running
Jan 13 18:34:32.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:32.680: INFO: stderr: ""
Jan 13 18:34:32.680: INFO: stdout: "update-demo-nautilus-67cnw update-demo-nautilus-mvj67 "
Jan 13 18:34:32.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-67cnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:32.773: INFO: stderr: ""
Jan 13 18:34:32.773: INFO: stdout: "true"
Jan 13 18:34:32.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-67cnw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:32.873: INFO: stderr: ""
Jan 13 18:34:32.873: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 13 18:34:32.873: INFO: validating pod update-demo-nautilus-67cnw
Jan 13 18:34:32.893: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 13 18:34:32.893: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 13 18:34:32.893: INFO: update-demo-nautilus-67cnw is verified up and running
Jan 13 18:34:32.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvj67 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:32.991: INFO: stderr: ""
Jan 13 18:34:32.991: INFO: stdout: "true"
Jan 13 18:34:32.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvj67 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:33.086: INFO: stderr: ""
Jan 13 18:34:33.086: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 13 18:34:33.086: INFO: validating pod update-demo-nautilus-mvj67
Jan 13 18:34:33.090: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 13 18:34:33.090: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 13 18:34:33.090: INFO: update-demo-nautilus-mvj67 is verified up and running
STEP: scaling down the replication controller
Jan 13 18:34:33.091: INFO: scanned /root for discovery docs: 
Jan 13 18:34:33.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:34.237: INFO: stderr: ""
Jan 13 18:34:34.237: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 13 18:34:34.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:34.347: INFO: stderr: ""
Jan 13 18:34:34.347: INFO: stdout: "update-demo-nautilus-67cnw update-demo-nautilus-mvj67 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 13 18:34:39.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:39.457: INFO: stderr: ""
Jan 13 18:34:39.457: INFO: stdout: "update-demo-nautilus-67cnw "
Jan 13 18:34:39.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-67cnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:39.595: INFO: stderr: ""
Jan 13 18:34:39.595: INFO: stdout: "true"
Jan 13 18:34:39.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-67cnw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:39.694: INFO: stderr: ""
Jan 13 18:34:39.694: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 13 18:34:39.694: INFO: validating pod update-demo-nautilus-67cnw
Jan 13 18:34:39.698: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 13 18:34:39.698: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 13 18:34:39.698: INFO: update-demo-nautilus-67cnw is verified up and running
STEP: scaling up the replication controller
Jan 13 18:34:39.700: INFO: scanned /root for discovery docs: 
Jan 13 18:34:39.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:40.871: INFO: stderr: ""
Jan 13 18:34:40.871: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 13 18:34:40.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:40.961: INFO: stderr: ""
Jan 13 18:34:40.961: INFO: stdout: "update-demo-nautilus-67cnw update-demo-nautilus-bwr5k "
Jan 13 18:34:40.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-67cnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:41.059: INFO: stderr: ""
Jan 13 18:34:41.059: INFO: stdout: "true"
Jan 13 18:34:41.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-67cnw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:41.146: INFO: stderr: ""
Jan 13 18:34:41.146: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 13 18:34:41.146: INFO: validating pod update-demo-nautilus-67cnw
Jan 13 18:34:41.149: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 13 18:34:41.149: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 13 18:34:41.149: INFO: update-demo-nautilus-67cnw is verified up and running
Jan 13 18:34:41.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bwr5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:41.269: INFO: stderr: ""
Jan 13 18:34:41.269: INFO: stdout: ""
Jan 13 18:34:41.269: INFO: update-demo-nautilus-bwr5k is created but not running
Jan 13 18:34:46.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:46.377: INFO: stderr: ""
Jan 13 18:34:46.377: INFO: stdout: "update-demo-nautilus-67cnw update-demo-nautilus-bwr5k "
Jan 13 18:34:46.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-67cnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:46.483: INFO: stderr: ""
Jan 13 18:34:46.483: INFO: stdout: "true"
Jan 13 18:34:46.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-67cnw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:46.598: INFO: stderr: ""
Jan 13 18:34:46.598: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 13 18:34:46.598: INFO: validating pod update-demo-nautilus-67cnw
Jan 13 18:34:46.602: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 13 18:34:46.602: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 13 18:34:46.602: INFO: update-demo-nautilus-67cnw is verified up and running
Jan 13 18:34:46.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bwr5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:46.701: INFO: stderr: ""
Jan 13 18:34:46.701: INFO: stdout: "true"
Jan 13 18:34:46.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bwr5k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:46.805: INFO: stderr: ""
Jan 13 18:34:46.805: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 13 18:34:46.805: INFO: validating pod update-demo-nautilus-bwr5k
Jan 13 18:34:46.809: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 13 18:34:46.809: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 13 18:34:46.809: INFO: update-demo-nautilus-bwr5k is verified up and running
STEP: using delete to clean up resources
Jan 13 18:34:46.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:46.910: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 18:34:46.910: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 13 18:34:46.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-s9xc8'
Jan 13 18:34:47.025: INFO: stderr: "No resources found.\n"
Jan 13 18:34:47.025: INFO: stdout: ""
Jan 13 18:34:47.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-s9xc8 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 13 18:34:47.137: INFO: stderr: ""
Jan 13 18:34:47.137: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:34:47.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-s9xc8" for this suite.
Jan 13 18:34:53.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:34:53.182: INFO: namespace: e2e-tests-kubectl-s9xc8, resource: bindings, ignored listing per whitelist
Jan 13 18:34:53.265: INFO: namespace e2e-tests-kubectl-s9xc8 deletion completed in 6.12469458s

• [SLOW TEST:28.824 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
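The scale verification above runs the same go-template query against each pod, treats an empty stdout as "not running yet", and retries after 5 seconds until every pod prints "true". A minimal Python sketch of that poll-until-running loop; the `check` callable is a hypothetical stand-in for the kubectl invocation:

```python
import time

def wait_until_running(pod_names, check, interval=5.0, timeout=300.0, sleep=time.sleep):
    """Poll check(pod) (truthy == container running) for every pod,
    retrying on an interval until all pass or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        pending = [p for p in pod_names if not check(p)]
        if not pending:
            return True
        sleep(interval)
    return False
```

In the log, `update-demo-nautilus-bwr5k` fails the first check ("created but not running") and passes on the retry five seconds later.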
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:34:53.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 13 18:34:53.369: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:35:01.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-489ss" for this suite.
Jan 13 18:35:25.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:35:25.686: INFO: namespace: e2e-tests-init-container-489ss, resource: bindings, ignored listing per whitelist
Jan 13 18:35:25.783: INFO: namespace e2e-tests-init-container-489ss deletion completed in 24.214210011s

• [SLOW TEST:32.518 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
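For a RestartAlways pod, the init containers listed in `spec.initContainers` run sequentially, each to successful completion, before any regular container starts. A simplified sketch of that ordering (ignoring restart and failure handling):

```python
def container_start_order(init_containers, containers):
    """Init containers start one at a time, in declaration order, and each
    must exit successfully before the next starts; regular containers only
    start after the last init container completes (simplified model)."""
    return list(init_containers) + list(containers)
```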
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:35:25.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:36:25.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-c652m" for this suite.
Jan 13 18:36:47.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:36:47.985: INFO: namespace: e2e-tests-container-probe-c652m, resource: bindings, ignored listing per whitelist
Jan 13 18:36:48.026: INFO: namespace e2e-tests-container-probe-c652m deletion completed in 22.107018078s

• [SLOW TEST:82.242 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
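The readiness-probe test above passes when the pod is observed NotReady for the full wait with `restartCount` still 0: a failing readiness probe only gates traffic, while only a failing liveness probe triggers a restart. A simplified sketch of that distinction (not the kubelet's actual probe manager):

```python
def apply_probe_results(liveness_ok, readiness_ok):
    """Simplified kubelet-style probe handling: readiness failures mark the
    container NotReady; liveness failures alone cause a restart."""
    return {"ready": readiness_ok, "restart": not liveness_ok}
```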
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:36:48.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 13 18:36:48.204: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 13 18:36:48.219: INFO: Waiting for terminating namespaces to be deleted...
Jan 13 18:36:48.223: INFO: 
Logging pods the kubelet thinks are on node hunter-control-plane before test
Jan 13 18:36:48.232: INFO: chaos-controller-manager-5c78c48d45-lgvrr from default started at 2021-01-11 06:43:21 +0000 UTC (1 container statuses recorded)
Jan 13 18:36:48.232: INFO: 	Container chaos-mesh ready: true, restart count 0
Jan 13 18:36:48.232: INFO: coredns-54ff9cd656-bt7q8 from kube-system started at 2021-01-10 17:37:35 +0000 UTC (1 container statuses recorded)
Jan 13 18:36:48.233: INFO: 	Container coredns ready: true, restart count 0
Jan 13 18:36:48.233: INFO: chaos-daemon-2shrz from default started at 2021-01-11 06:43:21 +0000 UTC (1 container statuses recorded)
Jan 13 18:36:48.233: INFO: 	Container chaos-daemon ready: true, restart count 0
Jan 13 18:36:48.233: INFO: kube-apiserver-hunter-control-plane from kube-system started at  (0 container statuses recorded)
Jan 13 18:36:48.233: INFO: kube-proxy-dqf89 from kube-system started at 2021-01-10 17:37:15 +0000 UTC (1 container statuses recorded)
Jan 13 18:36:48.233: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 13 18:36:48.233: INFO: kube-scheduler-hunter-control-plane from kube-system started at  (0 container statuses recorded)
Jan 13 18:36:48.233: INFO: kindnet-jwsht from kube-system started at 2021-01-10 17:37:15 +0000 UTC (1 container statuses recorded)
Jan 13 18:36:48.233: INFO: 	Container kindnet-cni ready: true, restart count 0
Jan 13 18:36:48.233: INFO: coredns-54ff9cd656-g95ns from kube-system started at 2021-01-10 17:37:34 +0000 UTC (1 container statuses recorded)
Jan 13 18:36:48.233: INFO: 	Container coredns ready: true, restart count 0
Jan 13 18:36:48.233: INFO: local-path-provisioner-65f5ddcc-jw6p2 from local-path-storage started at 2021-01-10 17:37:35 +0000 UTC (1 container statuses recorded)
Jan 13 18:36:48.233: INFO: 	Container local-path-provisioner ready: true, restart count 0
Jan 13 18:36:48.233: INFO: etcd-hunter-control-plane from kube-system started at  (0 container statuses recorded)
Jan 13 18:36:48.233: INFO: kube-controller-manager-hunter-control-plane from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-control-plane
Jan 13 18:36:48.306: INFO: Pod chaos-controller-manager-5c78c48d45-lgvrr requesting resource cpu=25m on Node hunter-control-plane
Jan 13 18:36:48.306: INFO: Pod chaos-daemon-2shrz requesting resource cpu=0m on Node hunter-control-plane
Jan 13 18:36:48.306: INFO: Pod coredns-54ff9cd656-bt7q8 requesting resource cpu=100m on Node hunter-control-plane
Jan 13 18:36:48.306: INFO: Pod coredns-54ff9cd656-g95ns requesting resource cpu=100m on Node hunter-control-plane
Jan 13 18:36:48.306: INFO: Pod etcd-hunter-control-plane requesting resource cpu=0m on Node hunter-control-plane
Jan 13 18:36:48.306: INFO: Pod kindnet-jwsht requesting resource cpu=100m on Node hunter-control-plane
Jan 13 18:36:48.306: INFO: Pod kube-apiserver-hunter-control-plane requesting resource cpu=250m on Node hunter-control-plane
Jan 13 18:36:48.306: INFO: Pod kube-controller-manager-hunter-control-plane requesting resource cpu=200m on Node hunter-control-plane
Jan 13 18:36:48.306: INFO: Pod kube-proxy-dqf89 requesting resource cpu=0m on Node hunter-control-plane
Jan 13 18:36:48.306: INFO: Pod kube-scheduler-hunter-control-plane requesting resource cpu=100m on Node hunter-control-plane
Jan 13 18:36:48.306: INFO: Pod local-path-provisioner-65f5ddcc-jw6p2 requesting resource cpu=0m on Node hunter-control-plane
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4b7d6353-55ce-11eb-8355-0242ac110009.1659de85df2cf7d0], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-tvrbv/filler-pod-4b7d6353-55ce-11eb-8355-0242ac110009 to hunter-control-plane]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4b7d6353-55ce-11eb-8355-0242ac110009.1659de862fc1ff64], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4b7d6353-55ce-11eb-8355-0242ac110009.1659de8670b3c70f], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4b7d6353-55ce-11eb-8355-0242ac110009.1659de867fb9d299], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.1659de86ce69c69c], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-control-plane
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:36:53.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-tvrbv" for this suite.
Jan 13 18:36:59.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:36:59.490: INFO: namespace: e2e-tests-sched-pred-tvrbv, resource: bindings, ignored listing per whitelist
Jan 13 18:36:59.515: INFO: namespace e2e-tests-sched-pred-tvrbv deletion completed in 6.127290692s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:11.488 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
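The predicate test sums the CPU *requests* (in millicores) of every pod already on the node, starts a filler pod sized to consume most of what remains, and then confirms one more pod fails with `Insufficient cpu`. The arithmetic can be sketched as follows; the 2000m allocatable figure below is a hypothetical stand-in, since the node's real allocatable CPU is not shown in the log:

```python
def remaining_millicpu(allocatable_m, pod_requests_m):
    """Millicores still schedulable on a node, given existing pod CPU requests."""
    return allocatable_m - sum(pod_requests_m)

def fits(request_m, allocatable_m, pod_requests_m):
    """True if a new pod's CPU request can still be placed on the node."""
    return request_m <= remaining_millicpu(allocatable_m, pod_requests_m)
```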
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:36:59.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan 13 18:36:59.645: INFO: Waiting up to 5m0s for pod "var-expansion-523e971c-55ce-11eb-8355-0242ac110009" in namespace "e2e-tests-var-expansion-w64tk" to be "success or failure"
Jan 13 18:36:59.656: INFO: Pod "var-expansion-523e971c-55ce-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 11.418898ms
Jan 13 18:37:01.660: INFO: Pod "var-expansion-523e971c-55ce-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015121346s
Jan 13 18:37:03.682: INFO: Pod "var-expansion-523e971c-55ce-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037337786s
STEP: Saw pod success
Jan 13 18:37:03.682: INFO: Pod "var-expansion-523e971c-55ce-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:37:03.685: INFO: Trying to get logs from node hunter-control-plane pod var-expansion-523e971c-55ce-11eb-8355-0242ac110009 container dapi-container: 
STEP: delete the pod
Jan 13 18:37:03.736: INFO: Waiting for pod var-expansion-523e971c-55ce-11eb-8355-0242ac110009 to disappear
Jan 13 18:37:03.770: INFO: Pod var-expansion-523e971c-55ce-11eb-8355-0242ac110009 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:37:03.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-w64tk" for this suite.
Jan 13 18:37:09.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:37:09.829: INFO: namespace: e2e-tests-var-expansion-w64tk, resource: bindings, ignored listing per whitelist
Jan 13 18:37:09.910: INFO: namespace e2e-tests-var-expansion-w64tk deletion completed in 6.136003865s

• [SLOW TEST:10.395 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
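The var-expansion pod composes new env vars from `$(VAR)` references to previously defined ones. A rough Python model of that substitution (approximating, not reproducing, the kubelet's expansion rules: unresolved references are left verbatim, and `$$` escapes a literal `$`):

```python
import re

def expand(value, env):
    """Expand $(VAR) references against already-defined env vars."""
    def sub(m):
        if m.group(0) == "$$":          # escaped dollar sign
            return "$"
        return env.get(m.group(1), m.group(0))  # unresolved -> keep verbatim
    return re.sub(r"\$\$|\$\(([A-Za-z0-9_]+)\)", sub, value)
```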
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:37:09.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 18:37:10.053: INFO: (0) /api/v1/nodes/hunter-control-plane/proxy/logs/: 
alternatives.log
containers/
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 13 18:37:16.386: INFO: Waiting up to 5m0s for pod "pod-5c37258c-55ce-11eb-8355-0242ac110009" in namespace "e2e-tests-emptydir-59bzn" to be "success or failure"
Jan 13 18:37:16.389: INFO: Pod "pod-5c37258c-55ce-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.96711ms
Jan 13 18:37:18.421: INFO: Pod "pod-5c37258c-55ce-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034359368s
Jan 13 18:37:20.425: INFO: Pod "pod-5c37258c-55ce-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038485129s
STEP: Saw pod success
Jan 13 18:37:20.425: INFO: Pod "pod-5c37258c-55ce-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:37:20.428: INFO: Trying to get logs from node hunter-control-plane pod pod-5c37258c-55ce-11eb-8355-0242ac110009 container test-container: 
STEP: delete the pod
Jan 13 18:37:20.466: INFO: Waiting for pod pod-5c37258c-55ce-11eb-8355-0242ac110009 to disappear
Jan 13 18:37:20.474: INFO: Pod pod-5c37258c-55ce-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:37:20.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-59bzn" for this suite.
Jan 13 18:37:26.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:37:26.541: INFO: namespace: e2e-tests-emptydir-59bzn, resource: bindings, ignored listing per whitelist
Jan 13 18:37:26.621: INFO: namespace e2e-tests-emptydir-59bzn deletion completed in 6.143676847s

• [SLOW TEST:10.353 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
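The emptyDir test mounts a default-medium volume, writes a file as a non-root user, and checks that the 0777 mode survives. A local sketch of the mode check; the file name and content are hypothetical, mimicking what the test's mount-tester container reports:

```python
import os
import stat
import tempfile

def mode_of(path):
    """Permission bits only (e.g. 0o777), ignoring file-type bits."""
    return stat.S_IMODE(os.stat(path).st_mode)

def make_test_file(directory, mode=0o777):
    """Create a file and force its mode, as the volume test does."""
    path = os.path.join(directory, "test-file")  # hypothetical name
    with open(path, "w") as f:
        f.write("mount-tester new file\n")
    os.chmod(path, mode)  # chmod is not affected by the process umask
    return path
```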
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:37:26.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 13 18:37:26.737: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6263537c-55ce-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-wrsjp" to be "success or failure"
Jan 13 18:37:26.741: INFO: Pod "downwardapi-volume-6263537c-55ce-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.824199ms
Jan 13 18:37:28.768: INFO: Pod "downwardapi-volume-6263537c-55ce-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031074108s
Jan 13 18:37:30.772: INFO: Pod "downwardapi-volume-6263537c-55ce-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035535095s
STEP: Saw pod success
Jan 13 18:37:30.773: INFO: Pod "downwardapi-volume-6263537c-55ce-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:37:30.776: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-6263537c-55ce-11eb-8355-0242ac110009 container client-container: 
STEP: delete the pod
Jan 13 18:37:30.795: INFO: Waiting for pod downwardapi-volume-6263537c-55ce-11eb-8355-0242ac110009 to disappear
Jan 13 18:37:30.800: INFO: Pod downwardapi-volume-6263537c-55ce-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:37:30.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wrsjp" for this suite.
Jan 13 18:37:36.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:37:36.877: INFO: namespace: e2e-tests-projected-wrsjp, resource: bindings, ignored listing per whitelist
Jan 13 18:37:36.897: INFO: namespace e2e-tests-projected-wrsjp deletion completed in 6.094531441s

• [SLOW TEST:10.276 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
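The projected downward API volume writes the container's memory request to a file, scaled by the divisor given in the `resourceFieldRef` (e.g. `1Mi`). A sketch of that scaling, assuming the documented round-up-to-integer behavior:

```python
# byte sizes for common downwardAPI resourceFieldRef divisors
DIVISORS = {"1": 1, "1Ki": 1024, "1Mi": 1024**2, "1Gi": 1024**3}

def downward_api_value(request_bytes, divisor="1"):
    """Value written to the volume file: the memory request divided by the
    divisor, rounded up to a whole integer (assumed rounding behavior)."""
    d = DIVISORS[divisor]
    return -(-request_bytes // d)  # ceiling division
```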
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:37:36.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 18:37:37.044: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 13 18:37:37.070: INFO: Number of nodes with available pods: 0
Jan 13 18:37:37.070: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 13 18:37:37.164: INFO: Number of nodes with available pods: 0
Jan 13 18:37:37.164: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:37:38.169: INFO: Number of nodes with available pods: 0
Jan 13 18:37:38.169: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:37:39.169: INFO: Number of nodes with available pods: 0
Jan 13 18:37:39.169: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:37:40.169: INFO: Number of nodes with available pods: 1
Jan 13 18:37:40.169: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 13 18:37:40.210: INFO: Number of nodes with available pods: 1
Jan 13 18:37:40.210: INFO: Number of running nodes: 0, number of available pods: 1
Jan 13 18:37:41.215: INFO: Number of nodes with available pods: 0
Jan 13 18:37:41.215: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 13 18:37:41.229: INFO: Number of nodes with available pods: 0
Jan 13 18:37:41.229: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:37:42.233: INFO: Number of nodes with available pods: 0
Jan 13 18:37:42.233: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:37:43.234: INFO: Number of nodes with available pods: 0
Jan 13 18:37:43.234: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:37:44.234: INFO: Number of nodes with available pods: 0
Jan 13 18:37:44.234: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:37:45.234: INFO: Number of nodes with available pods: 0
Jan 13 18:37:45.234: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:37:46.235: INFO: Number of nodes with available pods: 0
Jan 13 18:37:46.235: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:37:47.233: INFO: Number of nodes with available pods: 0
Jan 13 18:37:47.233: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:37:48.234: INFO: Number of nodes with available pods: 0
Jan 13 18:37:48.234: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:37:49.233: INFO: Number of nodes with available pods: 0
Jan 13 18:37:49.233: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:37:50.233: INFO: Number of nodes with available pods: 0
Jan 13 18:37:50.234: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:37:51.233: INFO: Number of nodes with available pods: 0
Jan 13 18:37:51.233: INFO: Node hunter-control-plane is running more than one daemon pod
Jan 13 18:37:52.233: INFO: Number of nodes with available pods: 1
Jan 13 18:37:52.233: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tt24g, will wait for the garbage collector to delete the pods
Jan 13 18:37:52.299: INFO: Deleting DaemonSet.extensions daemon-set took: 6.365845ms
Jan 13 18:37:52.399: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.276869ms
Jan 13 18:37:59.102: INFO: Number of nodes with available pods: 0
Jan 13 18:37:59.102: INFO: Number of running nodes: 0, number of available pods: 0
Jan 13 18:37:59.105: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tt24g/daemonsets","resourceVersion":"498454"},"items":null}

Jan 13 18:37:59.107: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tt24g/pods","resourceVersion":"498454"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:37:59.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-tt24g" for this suite.
Jan 13 18:38:05.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:38:05.223: INFO: namespace: e2e-tests-daemonsets-tt24g, resource: bindings, ignored listing per whitelist
Jan 13 18:38:05.275: INFO: namespace e2e-tests-daemonsets-tt24g deletion completed in 6.134188004s

• [SLOW TEST:28.378 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
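The polling above is the suite waiting for the DaemonSet's pod to become available on the single schedulable node, then deleting the DaemonSet and waiting for the garbage collector to remove its pods. A minimal sketch of the kind of DaemonSet involved — the image and labels are assumptions, not taken from the log; only the name `daemon-set` appears above:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set              # name matches the one deleted in the log
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # assumed image
```

Deleting such a DaemonSet with a foreground propagation policy (e.g. `kubectl delete ds daemon-set --cascade=foreground`) reproduces the "will wait for the garbage collector to delete the pods" behavior seen in the log.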
SSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:38:05.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:38:09.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-c54p5" for this suite.
Jan 13 18:38:47.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:38:47.497: INFO: namespace: e2e-tests-kubelet-test-c54p5, resource: bindings, ignored listing per whitelist
Jan 13 18:38:47.514: INFO: namespace e2e-tests-kubelet-test-c54p5 deletion completed in 38.115071154s

• [SLOW TEST:42.239 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
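The "read only busybox container" test presumably runs a pod whose container has `readOnlyRootFilesystem` enabled, then verifies writes to the root filesystem fail. A hedged sketch of such a pod — the name and command are illustrative, not from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # attempt a write; with a read-only root filesystem it is rejected
    command: ["/bin/sh", "-c", "echo test > /file || echo 'write rejected'"]
    securityContext:
      readOnlyRootFilesystem: true
```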
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:38:47.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 13 18:38:47.657: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:38:53.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-pzjmt" for this suite.
Jan 13 18:38:59.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:38:59.180: INFO: namespace: e2e-tests-init-container-pzjmt, resource: bindings, ignored listing per whitelist
Jan 13 18:38:59.237: INFO: namespace e2e-tests-init-container-pzjmt deletion completed in 6.123378896s

• [SLOW TEST:11.722 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
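With `restartPolicy: Never`, a failing init container is not retried: the pod goes to `Failed` and the app containers never start, which is what this test asserts. A minimal sketch under those assumptions (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-never         # hypothetical name
spec:
  restartPolicy: Never          # failed init container => whole pod fails
  initContainers:
  - name: init-fail
    image: busybox
    command: ["/bin/false"]     # exits non-zero
  containers:
  - name: app                   # never started
    image: busybox
    command: ["sleep", "3600"]
```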
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:38:59.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 13 18:38:59.342: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 13 18:38:59.360: INFO: Waiting for terminating namespaces to be deleted...
Jan 13 18:38:59.363: INFO: Logging pods the kubelet thinks are on node hunter-control-plane before test
Jan 13 18:38:59.371: INFO: etcd-hunter-control-plane from kube-system started at  (0 container statuses recorded)
Jan 13 18:38:59.372: INFO: kube-controller-manager-hunter-control-plane from kube-system started at  (0 container statuses recorded)
Jan 13 18:38:59.372: INFO: chaos-controller-manager-5c78c48d45-lgvrr from default started at 2021-01-11 06:43:21 +0000 UTC (1 container statuses recorded)
Jan 13 18:38:59.372: INFO: 	Container chaos-mesh ready: true, restart count 0
Jan 13 18:38:59.372: INFO: coredns-54ff9cd656-bt7q8 from kube-system started at 2021-01-10 17:37:35 +0000 UTC (1 container statuses recorded)
Jan 13 18:38:59.372: INFO: 	Container coredns ready: true, restart count 0
Jan 13 18:38:59.372: INFO: chaos-daemon-2shrz from default started at 2021-01-11 06:43:21 +0000 UTC (1 container statuses recorded)
Jan 13 18:38:59.372: INFO: 	Container chaos-daemon ready: true, restart count 0
Jan 13 18:38:59.372: INFO: kube-apiserver-hunter-control-plane from kube-system started at  (0 container statuses recorded)
Jan 13 18:38:59.372: INFO: kube-proxy-dqf89 from kube-system started at 2021-01-10 17:37:15 +0000 UTC (1 container statuses recorded)
Jan 13 18:38:59.372: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 13 18:38:59.372: INFO: kube-scheduler-hunter-control-plane from kube-system started at  (0 container statuses recorded)
Jan 13 18:38:59.372: INFO: kindnet-jwsht from kube-system started at 2021-01-10 17:37:15 +0000 UTC (1 container statuses recorded)
Jan 13 18:38:59.372: INFO: 	Container kindnet-cni ready: true, restart count 0
Jan 13 18:38:59.372: INFO: coredns-54ff9cd656-g95ns from kube-system started at 2021-01-10 17:37:34 +0000 UTC (1 container statuses recorded)
Jan 13 18:38:59.372: INFO: 	Container coredns ready: true, restart count 0
Jan 13 18:38:59.372: INFO: local-path-provisioner-65f5ddcc-jw6p2 from local-path-storage started at 2021-01-10 17:37:35 +0000 UTC (1 container statuses recorded)
Jan 13 18:38:59.372: INFO: 	Container local-path-provisioner ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.1659dea4638b2631], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:39:00.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-cbxq5" for this suite.
Jan 13 18:39:06.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:39:06.418: INFO: namespace: e2e-tests-sched-pred-cbxq5, resource: bindings, ignored listing per whitelist
Jan 13 18:39:06.502: INFO: namespace e2e-tests-sched-pred-cbxq5 deletion completed in 6.107181093s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.265 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:39:06.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan 13 18:39:06.639: INFO: Waiting up to 5m0s for pod "client-containers-9debeffe-55ce-11eb-8355-0242ac110009" in namespace "e2e-tests-containers-7kmx9" to be "success or failure"
Jan 13 18:39:06.656: INFO: Pod "client-containers-9debeffe-55ce-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.700269ms
Jan 13 18:39:08.660: INFO: Pod "client-containers-9debeffe-55ce-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020793158s
Jan 13 18:39:10.665: INFO: Pod "client-containers-9debeffe-55ce-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025537886s
STEP: Saw pod success
Jan 13 18:39:10.665: INFO: Pod "client-containers-9debeffe-55ce-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:39:10.668: INFO: Trying to get logs from node hunter-control-plane pod client-containers-9debeffe-55ce-11eb-8355-0242ac110009 container test-container: 
STEP: delete the pod
Jan 13 18:39:10.688: INFO: Waiting for pod client-containers-9debeffe-55ce-11eb-8355-0242ac110009 to disappear
Jan 13 18:39:10.756: INFO: Pod client-containers-9debeffe-55ce-11eb-8355-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:39:10.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-7kmx9" for this suite.
Jan 13 18:39:16.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:39:16.833: INFO: namespace: e2e-tests-containers-7kmx9, resource: bindings, ignored listing per whitelist
Jan 13 18:39:16.912: INFO: namespace e2e-tests-containers-7kmx9 deletion completed in 6.152469708s

• [SLOW TEST:10.410 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:39:16.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-a43af527-55ce-11eb-8355-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 13 18:39:17.333: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a43dc782-55ce-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-w8s47" to be "success or failure"
Jan 13 18:39:17.343: INFO: Pod "pod-projected-secrets-a43dc782-55ce-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 9.723614ms
Jan 13 18:39:19.346: INFO: Pod "pod-projected-secrets-a43dc782-55ce-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013209825s
Jan 13 18:39:21.351: INFO: Pod "pod-projected-secrets-a43dc782-55ce-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017392334s
STEP: Saw pod success
Jan 13 18:39:21.351: INFO: Pod "pod-projected-secrets-a43dc782-55ce-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:39:21.354: INFO: Trying to get logs from node hunter-control-plane pod pod-projected-secrets-a43dc782-55ce-11eb-8355-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Jan 13 18:39:21.550: INFO: Waiting for pod pod-projected-secrets-a43dc782-55ce-11eb-8355-0242ac110009 to disappear
Jan 13 18:39:21.575: INFO: Pod pod-projected-secrets-a43dc782-55ce-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:39:21.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w8s47" for this suite.
Jan 13 18:39:27.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:39:27.676: INFO: namespace: e2e-tests-projected-w8s47, resource: bindings, ignored listing per whitelist
Jan 13 18:39:27.747: INFO: namespace e2e-tests-projected-w8s47 deletion completed in 6.168566255s

• [SLOW TEST:10.835 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:39:27.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 13 18:39:28.046: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9rd8t,SelfLink:/api/v1/namespaces/e2e-tests-watch-9rd8t/configmaps/e2e-watch-test-label-changed,UID:aaa37b39-55ce-11eb-9c75-0242ac12000b,ResourceVersion:498750,Generation:0,CreationTimestamp:2021-01-13 18:39:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 13 18:39:28.046: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9rd8t,SelfLink:/api/v1/namespaces/e2e-tests-watch-9rd8t/configmaps/e2e-watch-test-label-changed,UID:aaa37b39-55ce-11eb-9c75-0242ac12000b,ResourceVersion:498751,Generation:0,CreationTimestamp:2021-01-13 18:39:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 13 18:39:28.046: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9rd8t,SelfLink:/api/v1/namespaces/e2e-tests-watch-9rd8t/configmaps/e2e-watch-test-label-changed,UID:aaa37b39-55ce-11eb-9c75-0242ac12000b,ResourceVersion:498752,Generation:0,CreationTimestamp:2021-01-13 18:39:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 13 18:39:38.132: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9rd8t,SelfLink:/api/v1/namespaces/e2e-tests-watch-9rd8t/configmaps/e2e-watch-test-label-changed,UID:aaa37b39-55ce-11eb-9c75-0242ac12000b,ResourceVersion:498771,Generation:0,CreationTimestamp:2021-01-13 18:39:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 13 18:39:38.132: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9rd8t,SelfLink:/api/v1/namespaces/e2e-tests-watch-9rd8t/configmaps/e2e-watch-test-label-changed,UID:aaa37b39-55ce-11eb-9c75-0242ac12000b,ResourceVersion:498772,Generation:0,CreationTimestamp:2021-01-13 18:39:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 13 18:39:38.132: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9rd8t,SelfLink:/api/v1/namespaces/e2e-tests-watch-9rd8t/configmaps/e2e-watch-test-label-changed,UID:aaa37b39-55ce-11eb-9c75-0242ac12000b,ResourceVersion:498773,Generation:0,CreationTimestamp:2021-01-13 18:39:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:39:38.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9rd8t" for this suite.
Jan 13 18:39:44.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:39:44.188: INFO: namespace: e2e-tests-watch-9rd8t, resource: bindings, ignored listing per whitelist
Jan 13 18:39:44.228: INFO: namespace e2e-tests-watch-9rd8t deletion completed in 6.091436124s

• [SLOW TEST:16.481 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:39:44.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-22j75
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-22j75 to expose endpoints map[]
Jan 13 18:39:44.399: INFO: Get endpoints failed (16.854464ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 13 18:39:45.403: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-22j75 exposes endpoints map[] (1.020356395s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-22j75
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-22j75 to expose endpoints map[pod1:[100]]
Jan 13 18:39:48.498: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-22j75 exposes endpoints map[pod1:[100]] (3.088854835s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-22j75
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-22j75 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 13 18:39:51.591: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-22j75 exposes endpoints map[pod1:[100] pod2:[101]] (3.089422395s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-22j75
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-22j75 to expose endpoints map[pod2:[101]]
Jan 13 18:39:52.619: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-22j75 exposes endpoints map[pod2:[101]] (1.023276166s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-22j75
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-22j75 to expose endpoints map[]
Jan 13 18:39:53.658: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-22j75 exposes endpoints map[] (1.033634992s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:39:53.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-22j75" for this suite.
Jan 13 18:40:15.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:40:15.835: INFO: namespace: e2e-tests-services-22j75, resource: bindings, ignored listing per whitelist
Jan 13 18:40:15.866: INFO: namespace e2e-tests-services-22j75 deletion completed in 22.127650405s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:31.638 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:40:15.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 13 18:40:15.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-qmz24'
Jan 13 18:40:16.177: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 13 18:40:16.177: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan 13 18:40:20.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-qmz24'
Jan 13 18:40:20.321: INFO: stderr: ""
Jan 13 18:40:20.321: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:40:20.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qmz24" for this suite.
Jan 13 18:40:42.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:40:42.438: INFO: namespace: e2e-tests-kubectl-qmz24, resource: bindings, ignored listing per whitelist
Jan 13 18:40:42.441: INFO: namespace e2e-tests-kubectl-qmz24 deletion completed in 22.116869998s

• [SLOW TEST:26.573 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:40:42.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 13 18:40:42.575: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d71ef83e-55ce-11eb-8355-0242ac110009" in namespace "e2e-tests-downward-api-gng9t" to be "success or failure"
Jan 13 18:40:42.588: INFO: Pod "downwardapi-volume-d71ef83e-55ce-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 13.620513ms
Jan 13 18:40:44.593: INFO: Pod "downwardapi-volume-d71ef83e-55ce-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018421413s
Jan 13 18:40:46.597: INFO: Pod "downwardapi-volume-d71ef83e-55ce-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022671016s
STEP: Saw pod success
Jan 13 18:40:46.597: INFO: Pod "downwardapi-volume-d71ef83e-55ce-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:40:46.601: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-d71ef83e-55ce-11eb-8355-0242ac110009 container client-container: 
STEP: delete the pod
Jan 13 18:40:46.729: INFO: Waiting for pod downwardapi-volume-d71ef83e-55ce-11eb-8355-0242ac110009 to disappear
Jan 13 18:40:46.800: INFO: Pod downwardapi-volume-d71ef83e-55ce-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:40:46.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gng9t" for this suite.
Jan 13 18:40:52.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:40:52.861: INFO: namespace: e2e-tests-downward-api-gng9t, resource: bindings, ignored listing per whitelist
Jan 13 18:40:52.899: INFO: namespace e2e-tests-downward-api-gng9t deletion completed in 6.096805938s

• [SLOW TEST:10.458 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:40:52.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-cw9fp
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 13 18:40:52.979: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 13 18:41:19.185: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.0.9 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-cw9fp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 13 18:41:19.185: INFO: >>> kubeConfig: /root/.kube/config
I0113 18:41:19.223504       6 log.go:172] (0xc0011fa580) (0xc000acefa0) Create stream
I0113 18:41:19.223545       6 log.go:172] (0xc0011fa580) (0xc000acefa0) Stream added, broadcasting: 1
I0113 18:41:19.226730       6 log.go:172] (0xc0011fa580) Reply frame received for 1
I0113 18:41:19.226764       6 log.go:172] (0xc0011fa580) (0xc0019ff4a0) Create stream
I0113 18:41:19.226776       6 log.go:172] (0xc0011fa580) (0xc0019ff4a0) Stream added, broadcasting: 3
I0113 18:41:19.227703       6 log.go:172] (0xc0011fa580) Reply frame received for 3
I0113 18:41:19.227769       6 log.go:172] (0xc0011fa580) (0xc00183c1e0) Create stream
I0113 18:41:19.227786       6 log.go:172] (0xc0011fa580) (0xc00183c1e0) Stream added, broadcasting: 5
I0113 18:41:19.228615       6 log.go:172] (0xc0011fa580) Reply frame received for 5
I0113 18:41:20.350878       6 log.go:172] (0xc0011fa580) Data frame received for 3
I0113 18:41:20.350912       6 log.go:172] (0xc0019ff4a0) (3) Data frame handling
I0113 18:41:20.350996       6 log.go:172] (0xc0019ff4a0) (3) Data frame sent
I0113 18:41:20.351024       6 log.go:172] (0xc0011fa580) Data frame received for 3
I0113 18:41:20.351033       6 log.go:172] (0xc0019ff4a0) (3) Data frame handling
I0113 18:41:20.351118       6 log.go:172] (0xc0011fa580) Data frame received for 5
I0113 18:41:20.351149       6 log.go:172] (0xc00183c1e0) (5) Data frame handling
I0113 18:41:20.353504       6 log.go:172] (0xc0011fa580) Data frame received for 1
I0113 18:41:20.353539       6 log.go:172] (0xc000acefa0) (1) Data frame handling
I0113 18:41:20.353570       6 log.go:172] (0xc000acefa0) (1) Data frame sent
I0113 18:41:20.353605       6 log.go:172] (0xc0011fa580) (0xc000acefa0) Stream removed, broadcasting: 1
I0113 18:41:20.353643       6 log.go:172] (0xc0011fa580) Go away received
I0113 18:41:20.353769       6 log.go:172] (0xc0011fa580) (0xc000acefa0) Stream removed, broadcasting: 1
I0113 18:41:20.353820       6 log.go:172] (0xc0011fa580) (0xc0019ff4a0) Stream removed, broadcasting: 3
I0113 18:41:20.353843       6 log.go:172] (0xc0011fa580) (0xc00183c1e0) Stream removed, broadcasting: 5
Jan 13 18:41:20.353: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:41:20.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-cw9fp" for this suite.
Jan 13 18:41:42.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:41:42.384: INFO: namespace: e2e-tests-pod-network-test-cw9fp, resource: bindings, ignored listing per whitelist
Jan 13 18:41:42.464: INFO: namespace e2e-tests-pod-network-test-cw9fp deletion completed in 22.107249561s

• [SLOW TEST:49.565 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:41:42.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 13 18:41:50.721: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:41:50.738: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:41:52.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:41:52.742: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:41:54.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:41:54.743: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:41:56.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:41:56.743: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:41:58.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:41:58.742: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:42:00.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:42:00.742: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:42:02.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:42:02.743: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:42:04.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:42:04.742: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:42:06.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:42:06.743: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:42:08.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:42:08.743: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:42:10.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:42:10.742: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:42:12.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:42:12.753: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:42:14.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:42:14.743: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:42:16.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:42:16.743: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:42:18.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:42:18.743: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 13 18:42:20.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 13 18:42:20.744: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:42:20.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-qc87v" for this suite.
Jan 13 18:42:42.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:42:42.779: INFO: namespace: e2e-tests-container-lifecycle-hook-qc87v, resource: bindings, ignored listing per whitelist
Jan 13 18:42:42.849: INFO: namespace e2e-tests-container-lifecycle-hook-qc87v deletion completed in 22.101179951s

• [SLOW TEST:60.384 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
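The long run of "Waiting for pod ... to disappear / still exists" pairs above is a poll loop: re-check pod existence every 2 seconds until it is gone or a deadline passes. A minimal sketch of that pattern — a hypothetical helper for illustration, not the e2e framework's actual implementation:

```python
import time

def wait_for_disappear(check_exists, timeout=60.0, interval=2.0):
    """Poll `check_exists()` every `interval` seconds until it returns
    False (object gone) or `timeout` seconds elapse. Mirrors the
    2-second cadence visible in the log; hypothetical helper."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not check_exists():
            return True   # "Pod ... no longer exists"
        time.sleep(interval)  # "Pod ... still exists" -> retry
    return False  # deadline hit while the object still existed
```

In the run above the pod disappeared after roughly 30 seconds of polling, well inside the wait budget, so the test proceeded to namespace teardown.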
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:42:42.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-1ee9964d-55cf-11eb-8355-0242ac110009
STEP: Creating configMap with name cm-test-opt-upd-1ee996a7-55cf-11eb-8355-0242ac110009
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-1ee9964d-55cf-11eb-8355-0242ac110009
STEP: Updating configmap cm-test-opt-upd-1ee996a7-55cf-11eb-8355-0242ac110009
STEP: Creating configMap with name cm-test-opt-create-1ee996d7-55cf-11eb-8355-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:42:53.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hvjb2" for this suite.
Jan 13 18:43:17.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:43:17.267: INFO: namespace: e2e-tests-projected-hvjb2, resource: bindings, ignored listing per whitelist
Jan 13 18:43:17.342: INFO: namespace e2e-tests-projected-hvjb2 deletion completed in 24.108603899s

• [SLOW TEST:34.493 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:43:17.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0113 18:43:18.526976       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 13 18:43:18.527: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:43:18.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-clb2h" for this suite.
Jan 13 18:43:24.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:43:24.607: INFO: namespace: e2e-tests-gc-clb2h, resource: bindings, ignored listing per whitelist
Jan 13 18:43:24.635: INFO: namespace e2e-tests-gc-clb2h deletion completed in 6.105349274s

• [SLOW TEST:7.293 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:43:24.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-tw84
STEP: Creating a pod to test atomic-volume-subpath
Jan 13 18:43:24.784: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-tw84" in namespace "e2e-tests-subpath-q5knp" to be "success or failure"
Jan 13 18:43:24.789: INFO: Pod "pod-subpath-test-projected-tw84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.647552ms
Jan 13 18:43:26.855: INFO: Pod "pod-subpath-test-projected-tw84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071229699s
Jan 13 18:43:28.860: INFO: Pod "pod-subpath-test-projected-tw84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075334944s
Jan 13 18:43:30.864: INFO: Pod "pod-subpath-test-projected-tw84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079556974s
Jan 13 18:43:32.868: INFO: Pod "pod-subpath-test-projected-tw84": Phase="Running", Reason="", readiness=false. Elapsed: 8.083308604s
Jan 13 18:43:34.871: INFO: Pod "pod-subpath-test-projected-tw84": Phase="Running", Reason="", readiness=false. Elapsed: 10.086932239s
Jan 13 18:43:36.875: INFO: Pod "pod-subpath-test-projected-tw84": Phase="Running", Reason="", readiness=false. Elapsed: 12.090752103s
Jan 13 18:43:38.878: INFO: Pod "pod-subpath-test-projected-tw84": Phase="Running", Reason="", readiness=false. Elapsed: 14.094247361s
Jan 13 18:43:40.883: INFO: Pod "pod-subpath-test-projected-tw84": Phase="Running", Reason="", readiness=false. Elapsed: 16.098884954s
Jan 13 18:43:42.888: INFO: Pod "pod-subpath-test-projected-tw84": Phase="Running", Reason="", readiness=false. Elapsed: 18.103749811s
Jan 13 18:43:44.893: INFO: Pod "pod-subpath-test-projected-tw84": Phase="Running", Reason="", readiness=false. Elapsed: 20.108575093s
Jan 13 18:43:46.897: INFO: Pod "pod-subpath-test-projected-tw84": Phase="Running", Reason="", readiness=false. Elapsed: 22.112384471s
Jan 13 18:43:48.901: INFO: Pod "pod-subpath-test-projected-tw84": Phase="Running", Reason="", readiness=false. Elapsed: 24.116821479s
Jan 13 18:43:50.906: INFO: Pod "pod-subpath-test-projected-tw84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.121600942s
STEP: Saw pod success
Jan 13 18:43:50.906: INFO: Pod "pod-subpath-test-projected-tw84" satisfied condition "success or failure"
Jan 13 18:43:50.910: INFO: Trying to get logs from node hunter-control-plane pod pod-subpath-test-projected-tw84 container test-container-subpath-projected-tw84: 
STEP: delete the pod
Jan 13 18:43:50.951: INFO: Waiting for pod pod-subpath-test-projected-tw84 to disappear
Jan 13 18:43:51.029: INFO: Pod pod-subpath-test-projected-tw84 no longer exists
STEP: Deleting pod pod-subpath-test-projected-tw84
Jan 13 18:43:51.029: INFO: Deleting pod "pod-subpath-test-projected-tw84" in namespace "e2e-tests-subpath-q5knp"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:43:51.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-q5knp" for this suite.
Jan 13 18:43:57.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:43:57.241: INFO: namespace: e2e-tests-subpath-q5knp, resource: bindings, ignored listing per whitelist
Jan 13 18:43:57.271: INFO: namespace e2e-tests-subpath-q5knp deletion completed in 6.173286656s

• [SLOW TEST:32.636 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:43:57.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-4b4304a0-55cf-11eb-8355-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 13 18:43:57.437: INFO: Waiting up to 5m0s for pod "pod-secrets-4b437b78-55cf-11eb-8355-0242ac110009" in namespace "e2e-tests-secrets-xhdqz" to be "success or failure"
Jan 13 18:43:57.447: INFO: Pod "pod-secrets-4b437b78-55cf-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 10.006742ms
Jan 13 18:43:59.451: INFO: Pod "pod-secrets-4b437b78-55cf-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013679015s
Jan 13 18:44:01.456: INFO: Pod "pod-secrets-4b437b78-55cf-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018449967s
STEP: Saw pod success
Jan 13 18:44:01.456: INFO: Pod "pod-secrets-4b437b78-55cf-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:44:01.459: INFO: Trying to get logs from node hunter-control-plane pod pod-secrets-4b437b78-55cf-11eb-8355-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan 13 18:44:01.517: INFO: Waiting for pod pod-secrets-4b437b78-55cf-11eb-8355-0242ac110009 to disappear
Jan 13 18:44:01.531: INFO: Pod pod-secrets-4b437b78-55cf-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:44:01.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xhdqz" for this suite.
Jan 13 18:44:07.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:44:07.591: INFO: namespace: e2e-tests-secrets-xhdqz, resource: bindings, ignored listing per whitelist
Jan 13 18:44:07.639: INFO: namespace e2e-tests-secrets-xhdqz deletion completed in 6.105147745s

• [SLOW TEST:10.368 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:44:07.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 18:44:07.792: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan 13 18:44:07.798: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-hkdhc/daemonsets","resourceVersion":"499603"},"items":null}

Jan 13 18:44:07.800: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-hkdhc/pods","resourceVersion":"499603"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:44:07.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-hkdhc" for this suite.
Jan 13 18:44:13.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:44:13.848: INFO: namespace: e2e-tests-daemonsets-hkdhc, resource: bindings, ignored listing per whitelist
Jan 13 18:44:13.913: INFO: namespace e2e-tests-daemonsets-hkdhc deletion completed in 6.104519053s

S [SKIPPING] [6.274 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan 13 18:44:07.792: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:44:13.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0113 18:44:44.540142       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 13 18:44:44.540: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:44:44.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-cqhmr" for this suite.
Jan 13 18:44:50.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:44:50.653: INFO: namespace: e2e-tests-gc-cqhmr, resource: bindings, ignored listing per whitelist
Jan 13 18:44:50.695: INFO: namespace e2e-tests-gc-cqhmr deletion completed in 6.153057127s

• [SLOW TEST:36.782 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:44:50.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 13 18:44:58.142: INFO: 0 pods remaining
Jan 13 18:44:58.142: INFO: 0 pods has nil DeletionTimestamp
Jan 13 18:44:58.142: INFO: 
STEP: Gathering metrics
W0113 18:44:59.137373       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 13 18:44:59.137: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:44:59.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-5mwvp" for this suite.
Jan 13 18:45:05.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:45:05.207: INFO: namespace: e2e-tests-gc-5mwvp, resource: bindings, ignored listing per whitelist
Jan 13 18:45:05.253: INFO: namespace e2e-tests-gc-5mwvp deletion completed in 6.114118464s

• [SLOW TEST:14.558 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:45:05.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 18:45:05.392: INFO: Creating ReplicaSet my-hostname-basic-73c6a046-55cf-11eb-8355-0242ac110009
Jan 13 18:45:05.401: INFO: Pod name my-hostname-basic-73c6a046-55cf-11eb-8355-0242ac110009: Found 0 pods out of 1
Jan 13 18:45:10.406: INFO: Pod name my-hostname-basic-73c6a046-55cf-11eb-8355-0242ac110009: Found 1 pods out of 1
Jan 13 18:45:10.406: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-73c6a046-55cf-11eb-8355-0242ac110009" is running
Jan 13 18:45:10.409: INFO: Pod "my-hostname-basic-73c6a046-55cf-11eb-8355-0242ac110009-wcc8d" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 18:45:05 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 18:45:08 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 18:45:08 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 18:45:05 +0000 UTC Reason: Message:}])
Jan 13 18:45:10.409: INFO: Trying to dial the pod
Jan 13 18:45:15.427: INFO: Controller my-hostname-basic-73c6a046-55cf-11eb-8355-0242ac110009: Got expected result from replica 1 [my-hostname-basic-73c6a046-55cf-11eb-8355-0242ac110009-wcc8d]: "my-hostname-basic-73c6a046-55cf-11eb-8355-0242ac110009-wcc8d", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:45:15.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-2n2ln" for this suite.
Jan 13 18:45:21.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:45:21.480: INFO: namespace: e2e-tests-replicaset-2n2ln, resource: bindings, ignored listing per whitelist
Jan 13 18:45:21.540: INFO: namespace e2e-tests-replicaset-2n2ln deletion completed in 6.110086577s

• [SLOW TEST:16.286 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:45:21.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-7d7d9654-55cf-11eb-8355-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 13 18:45:21.732: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7d7e2c16-55cf-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-ldd9p" to be "success or failure"
Jan 13 18:45:21.743: INFO: Pod "pod-projected-configmaps-7d7e2c16-55cf-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 10.328153ms
Jan 13 18:45:23.746: INFO: Pod "pod-projected-configmaps-7d7e2c16-55cf-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014134181s
Jan 13 18:45:25.750: INFO: Pod "pod-projected-configmaps-7d7e2c16-55cf-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017821222s
STEP: Saw pod success
Jan 13 18:45:25.750: INFO: Pod "pod-projected-configmaps-7d7e2c16-55cf-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:45:25.753: INFO: Trying to get logs from node hunter-control-plane pod pod-projected-configmaps-7d7e2c16-55cf-11eb-8355-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 13 18:45:25.984: INFO: Waiting for pod pod-projected-configmaps-7d7e2c16-55cf-11eb-8355-0242ac110009 to disappear
Jan 13 18:45:26.018: INFO: Pod pod-projected-configmaps-7d7e2c16-55cf-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:45:26.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ldd9p" for this suite.
Jan 13 18:45:32.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:45:32.070: INFO: namespace: e2e-tests-projected-ldd9p, resource: bindings, ignored listing per whitelist
Jan 13 18:45:32.116: INFO: namespace e2e-tests-projected-ldd9p deletion completed in 6.09391695s

• [SLOW TEST:10.576 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:45:32.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:45:36.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-kfdck" for this suite.
Jan 13 18:46:14.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:46:14.336: INFO: namespace: e2e-tests-kubelet-test-kfdck, resource: bindings, ignored listing per whitelist
Jan 13 18:46:14.453: INFO: namespace e2e-tests-kubelet-test-kfdck deletion completed in 38.158863947s

• [SLOW TEST:42.337 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:46:14.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 13 18:46:14.585: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9cfeadfb-55cf-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-kjbc9" to be "success or failure"
Jan 13 18:46:14.601: INFO: Pod "downwardapi-volume-9cfeadfb-55cf-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.893158ms
Jan 13 18:46:16.606: INFO: Pod "downwardapi-volume-9cfeadfb-55cf-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020358996s
Jan 13 18:46:18.610: INFO: Pod "downwardapi-volume-9cfeadfb-55cf-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024314535s
STEP: Saw pod success
Jan 13 18:46:18.610: INFO: Pod "downwardapi-volume-9cfeadfb-55cf-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:46:18.613: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-9cfeadfb-55cf-11eb-8355-0242ac110009 container client-container: 
STEP: delete the pod
Jan 13 18:46:18.679: INFO: Waiting for pod downwardapi-volume-9cfeadfb-55cf-11eb-8355-0242ac110009 to disappear
Jan 13 18:46:18.684: INFO: Pod downwardapi-volume-9cfeadfb-55cf-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:46:18.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kjbc9" for this suite.
Jan 13 18:46:24.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:46:24.776: INFO: namespace: e2e-tests-projected-kjbc9, resource: bindings, ignored listing per whitelist
Jan 13 18:46:24.780: INFO: namespace e2e-tests-projected-kjbc9 deletion completed in 6.093602154s

• [SLOW TEST:10.326 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:46:24.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 13 18:46:29.479: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a32726d4-55cf-11eb-8355-0242ac110009"
Jan 13 18:46:29.479: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a32726d4-55cf-11eb-8355-0242ac110009" in namespace "e2e-tests-pods-gkgbh" to be "terminated due to deadline exceeded"
Jan 13 18:46:29.533: INFO: Pod "pod-update-activedeadlineseconds-a32726d4-55cf-11eb-8355-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 53.964373ms
Jan 13 18:46:31.537: INFO: Pod "pod-update-activedeadlineseconds-a32726d4-55cf-11eb-8355-0242ac110009": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.057893543s
Jan 13 18:46:31.537: INFO: Pod "pod-update-activedeadlineseconds-a32726d4-55cf-11eb-8355-0242ac110009" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:46:31.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gkgbh" for this suite.
Jan 13 18:46:37.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:46:37.618: INFO: namespace: e2e-tests-pods-gkgbh, resource: bindings, ignored listing per whitelist
Jan 13 18:46:37.668: INFO: namespace e2e-tests-pods-gkgbh deletion completed in 6.126661371s

• [SLOW TEST:12.887 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:46:37.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-xl5bl
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-xl5bl
STEP: Deleting pre-stop pod
Jan 13 18:46:50.855: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:46:50.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-xl5bl" for this suite.
Jan 13 18:47:36.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:47:36.932: INFO: namespace: e2e-tests-prestop-xl5bl, resource: bindings, ignored listing per whitelist
Jan 13 18:47:36.970: INFO: namespace e2e-tests-prestop-xl5bl deletion completed in 46.09921186s

• [SLOW TEST:59.302 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:47:36.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-ce388299-55cf-11eb-8355-0242ac110009
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-ce388299-55cf-11eb-8355-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:48:45.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mj9z4" for this suite.
Jan 13 18:49:07.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:49:07.700: INFO: namespace: e2e-tests-configmap-mj9z4, resource: bindings, ignored listing per whitelist
Jan 13 18:49:07.743: INFO: namespace e2e-tests-configmap-mj9z4 deletion completed in 22.168930362s

• [SLOW TEST:90.773 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:49:07.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-044916e5-55d0-11eb-8355-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 13 18:49:07.868: INFO: Waiting up to 5m0s for pod "pod-configmaps-044b7e3f-55d0-11eb-8355-0242ac110009" in namespace "e2e-tests-configmap-jkcdd" to be "success or failure"
Jan 13 18:49:07.872: INFO: Pod "pod-configmaps-044b7e3f-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.934215ms
Jan 13 18:49:10.074: INFO: Pod "pod-configmaps-044b7e3f-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205388002s
Jan 13 18:49:12.078: INFO: Pod "pod-configmaps-044b7e3f-55d0-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.20949108s
STEP: Saw pod success
Jan 13 18:49:12.078: INFO: Pod "pod-configmaps-044b7e3f-55d0-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:49:12.081: INFO: Trying to get logs from node hunter-control-plane pod pod-configmaps-044b7e3f-55d0-11eb-8355-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jan 13 18:49:12.138: INFO: Waiting for pod pod-configmaps-044b7e3f-55d0-11eb-8355-0242ac110009 to disappear
Jan 13 18:49:12.148: INFO: Pod pod-configmaps-044b7e3f-55d0-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:49:12.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jkcdd" for this suite.
Jan 13 18:49:18.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:49:18.286: INFO: namespace: e2e-tests-configmap-jkcdd, resource: bindings, ignored listing per whitelist
Jan 13 18:49:18.287: INFO: namespace e2e-tests-configmap-jkcdd deletion completed in 6.136469541s

• [SLOW TEST:10.543 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:49:18.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-0a9144cb-55d0-11eb-8355-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 13 18:49:18.396: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0a92f0c2-55d0-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-bbljn" to be "success or failure"
Jan 13 18:49:18.400: INFO: Pod "pod-projected-secrets-0a92f0c2-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.815854ms
Jan 13 18:49:20.404: INFO: Pod "pod-projected-secrets-0a92f0c2-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007813284s
Jan 13 18:49:22.408: INFO: Pod "pod-projected-secrets-0a92f0c2-55d0-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011098544s
STEP: Saw pod success
Jan 13 18:49:22.408: INFO: Pod "pod-projected-secrets-0a92f0c2-55d0-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:49:22.410: INFO: Trying to get logs from node hunter-control-plane pod pod-projected-secrets-0a92f0c2-55d0-11eb-8355-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Jan 13 18:49:22.432: INFO: Waiting for pod pod-projected-secrets-0a92f0c2-55d0-11eb-8355-0242ac110009 to disappear
Jan 13 18:49:22.479: INFO: Pod pod-projected-secrets-0a92f0c2-55d0-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:49:22.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bbljn" for this suite.
Jan 13 18:49:28.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:49:28.675: INFO: namespace: e2e-tests-projected-bbljn, resource: bindings, ignored listing per whitelist
Jan 13 18:49:28.742: INFO: namespace e2e-tests-projected-bbljn deletion completed in 6.119993269s

• [SLOW TEST:10.456 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:49:28.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:49:28.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-xdxct" for this suite.
Jan 13 18:49:34.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:49:34.958: INFO: namespace: e2e-tests-kubelet-test-xdxct, resource: bindings, ignored listing per whitelist
Jan 13 18:49:35.011: INFO: namespace e2e-tests-kubelet-test-xdxct deletion completed in 6.112552975s

• [SLOW TEST:6.268 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:49:35.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 13 18:49:35.120: INFO: Waiting up to 5m0s for pod "downwardapi-volume-148b16b8-55d0-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-m9dbm" to be "success or failure"
Jan 13 18:49:35.170: INFO: Pod "downwardapi-volume-148b16b8-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 49.957747ms
Jan 13 18:49:37.174: INFO: Pod "downwardapi-volume-148b16b8-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053315347s
Jan 13 18:49:39.634: INFO: Pod "downwardapi-volume-148b16b8-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.513376397s
Jan 13 18:49:41.638: INFO: Pod "downwardapi-volume-148b16b8-55d0-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.517794367s
STEP: Saw pod success
Jan 13 18:49:41.638: INFO: Pod "downwardapi-volume-148b16b8-55d0-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:49:41.641: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-148b16b8-55d0-11eb-8355-0242ac110009 container client-container: 
STEP: delete the pod
Jan 13 18:49:41.728: INFO: Waiting for pod downwardapi-volume-148b16b8-55d0-11eb-8355-0242ac110009 to disappear
Jan 13 18:49:41.777: INFO: Pod downwardapi-volume-148b16b8-55d0-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:49:41.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m9dbm" for this suite.
Jan 13 18:49:47.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:49:47.914: INFO: namespace: e2e-tests-projected-m9dbm, resource: bindings, ignored listing per whitelist
Jan 13 18:49:47.941: INFO: namespace e2e-tests-projected-m9dbm deletion completed in 6.159865115s

• [SLOW TEST:12.930 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:49:47.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan 13 18:49:48.068: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-7xvhd" to be "success or failure"
Jan 13 18:49:48.072: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.797412ms
Jan 13 18:49:50.104: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03570717s
Jan 13 18:49:52.116: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047838836s
Jan 13 18:49:54.120: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051646593s
STEP: Saw pod success
Jan 13 18:49:54.120: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 13 18:49:54.123: INFO: Trying to get logs from node hunter-control-plane pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 13 18:49:54.198: INFO: Waiting for pod pod-host-path-test to disappear
Jan 13 18:49:54.222: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:49:54.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-7xvhd" for this suite.
Jan 13 18:50:00.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:50:00.256: INFO: namespace: e2e-tests-hostpath-7xvhd, resource: bindings, ignored listing per whitelist
Jan 13 18:50:00.342: INFO: namespace e2e-tests-hostpath-7xvhd deletion completed in 6.117099281s

• [SLOW TEST:12.401 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:50:00.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 13 18:50:00.534: INFO: Waiting up to 5m0s for pod "pod-23ac12c2-55d0-11eb-8355-0242ac110009" in namespace "e2e-tests-emptydir-hl4jh" to be "success or failure"
Jan 13 18:50:00.546: INFO: Pod "pod-23ac12c2-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 11.562954ms
Jan 13 18:50:02.550: INFO: Pod "pod-23ac12c2-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016035671s
Jan 13 18:50:04.554: INFO: Pod "pod-23ac12c2-55d0-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019644197s
STEP: Saw pod success
Jan 13 18:50:04.554: INFO: Pod "pod-23ac12c2-55d0-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:50:04.556: INFO: Trying to get logs from node hunter-control-plane pod pod-23ac12c2-55d0-11eb-8355-0242ac110009 container test-container: 
STEP: delete the pod
Jan 13 18:50:04.668: INFO: Waiting for pod pod-23ac12c2-55d0-11eb-8355-0242ac110009 to disappear
Jan 13 18:50:04.677: INFO: Pod pod-23ac12c2-55d0-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:50:04.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hl4jh" for this suite.
Jan 13 18:50:10.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:50:10.725: INFO: namespace: e2e-tests-emptydir-hl4jh, resource: bindings, ignored listing per whitelist
Jan 13 18:50:10.782: INFO: namespace e2e-tests-emptydir-hl4jh deletion completed in 6.102137275s

• [SLOW TEST:10.440 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:50:10.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 13 18:50:10.942: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29e311f7-55d0-11eb-8355-0242ac110009" in namespace "e2e-tests-downward-api-5f275" to be "success or failure"
Jan 13 18:50:10.951: INFO: Pod "downwardapi-volume-29e311f7-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 9.709705ms
Jan 13 18:50:12.956: INFO: Pod "downwardapi-volume-29e311f7-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014219219s
Jan 13 18:50:14.960: INFO: Pod "downwardapi-volume-29e311f7-55d0-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018774489s
STEP: Saw pod success
Jan 13 18:50:14.961: INFO: Pod "downwardapi-volume-29e311f7-55d0-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:50:14.963: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-29e311f7-55d0-11eb-8355-0242ac110009 container client-container: 
STEP: delete the pod
Jan 13 18:50:14.983: INFO: Waiting for pod downwardapi-volume-29e311f7-55d0-11eb-8355-0242ac110009 to disappear
Jan 13 18:50:14.987: INFO: Pod downwardapi-volume-29e311f7-55d0-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:50:14.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5f275" for this suite.
Jan 13 18:50:21.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:50:21.026: INFO: namespace: e2e-tests-downward-api-5f275, resource: bindings, ignored listing per whitelist
Jan 13 18:50:21.100: INFO: namespace e2e-tests-downward-api-5f275 deletion completed in 6.109503022s

• [SLOW TEST:10.317 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:50:21.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-wl8p6
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 13 18:50:21.199: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 13 18:50:43.378: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.0.45:8080/dial?request=hostName&protocol=udp&host=10.244.0.44&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-wl8p6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 13 18:50:43.378: INFO: >>> kubeConfig: /root/.kube/config
I0113 18:50:43.416588       6 log.go:172] (0xc0008f42c0) (0xc0012e1900) Create stream
I0113 18:50:43.416616       6 log.go:172] (0xc0008f42c0) (0xc0012e1900) Stream added, broadcasting: 1
I0113 18:50:43.419729       6 log.go:172] (0xc0008f42c0) Reply frame received for 1
I0113 18:50:43.419776       6 log.go:172] (0xc0008f42c0) (0xc0009832c0) Create stream
I0113 18:50:43.419788       6 log.go:172] (0xc0008f42c0) (0xc0009832c0) Stream added, broadcasting: 3
I0113 18:50:43.420620       6 log.go:172] (0xc0008f42c0) Reply frame received for 3
I0113 18:50:43.420639       6 log.go:172] (0xc0008f42c0) (0xc000f181e0) Create stream
I0113 18:50:43.420648       6 log.go:172] (0xc0008f42c0) (0xc000f181e0) Stream added, broadcasting: 5
I0113 18:50:43.421673       6 log.go:172] (0xc0008f42c0) Reply frame received for 5
I0113 18:50:43.521572       6 log.go:172] (0xc0008f42c0) Data frame received for 3
I0113 18:50:43.521603       6 log.go:172] (0xc0009832c0) (3) Data frame handling
I0113 18:50:43.521623       6 log.go:172] (0xc0009832c0) (3) Data frame sent
I0113 18:50:43.522306       6 log.go:172] (0xc0008f42c0) Data frame received for 3
I0113 18:50:43.522340       6 log.go:172] (0xc0009832c0) (3) Data frame handling
I0113 18:50:43.522646       6 log.go:172] (0xc0008f42c0) Data frame received for 5
I0113 18:50:43.522678       6 log.go:172] (0xc000f181e0) (5) Data frame handling
I0113 18:50:43.524371       6 log.go:172] (0xc0008f42c0) Data frame received for 1
I0113 18:50:43.524401       6 log.go:172] (0xc0012e1900) (1) Data frame handling
I0113 18:50:43.524427       6 log.go:172] (0xc0012e1900) (1) Data frame sent
I0113 18:50:43.524465       6 log.go:172] (0xc0008f42c0) (0xc0012e1900) Stream removed, broadcasting: 1
I0113 18:50:43.524493       6 log.go:172] (0xc0008f42c0) Go away received
I0113 18:50:43.524578       6 log.go:172] (0xc0008f42c0) (0xc0012e1900) Stream removed, broadcasting: 1
I0113 18:50:43.524597       6 log.go:172] (0xc0008f42c0) (0xc0009832c0) Stream removed, broadcasting: 3
I0113 18:50:43.524605       6 log.go:172] (0xc0008f42c0) (0xc000f181e0) Stream removed, broadcasting: 5
Jan 13 18:50:43.524: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:50:43.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-wl8p6" for this suite.
Jan 13 18:51:07.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:51:07.587: INFO: namespace: e2e-tests-pod-network-test-wl8p6, resource: bindings, ignored listing per whitelist
Jan 13 18:51:07.632: INFO: namespace e2e-tests-pod-network-test-wl8p6 deletion completed in 24.103710502s

• [SLOW TEST:46.532 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:51:07.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-4bbc4df8-55d0-11eb-8355-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 13 18:51:07.753: INFO: Waiting up to 5m0s for pod "pod-configmaps-4bbea53c-55d0-11eb-8355-0242ac110009" in namespace "e2e-tests-configmap-pmb4m" to be "success or failure"
Jan 13 18:51:07.762: INFO: Pod "pod-configmaps-4bbea53c-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 9.607979ms
Jan 13 18:51:09.766: INFO: Pod "pod-configmaps-4bbea53c-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013400099s
Jan 13 18:51:11.771: INFO: Pod "pod-configmaps-4bbea53c-55d0-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018235536s
STEP: Saw pod success
Jan 13 18:51:11.771: INFO: Pod "pod-configmaps-4bbea53c-55d0-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:51:11.774: INFO: Trying to get logs from node hunter-control-plane pod pod-configmaps-4bbea53c-55d0-11eb-8355-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jan 13 18:51:11.793: INFO: Waiting for pod pod-configmaps-4bbea53c-55d0-11eb-8355-0242ac110009 to disappear
Jan 13 18:51:11.798: INFO: Pod pod-configmaps-4bbea53c-55d0-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:51:11.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pmb4m" for this suite.
Jan 13 18:51:17.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:51:17.875: INFO: namespace: e2e-tests-configmap-pmb4m, resource: bindings, ignored listing per whitelist
Jan 13 18:51:17.877: INFO: namespace e2e-tests-configmap-pmb4m deletion completed in 6.07681561s

• [SLOW TEST:10.245 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:51:17.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 13 18:51:22.043: INFO: Waiting up to 5m0s for pod "client-envvars-54460177-55d0-11eb-8355-0242ac110009" in namespace "e2e-tests-pods-ks7hj" to be "success or failure"
Jan 13 18:51:22.087: INFO: Pod "client-envvars-54460177-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 44.522233ms
Jan 13 18:51:24.091: INFO: Pod "client-envvars-54460177-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047933381s
Jan 13 18:51:26.095: INFO: Pod "client-envvars-54460177-55d0-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052428666s
STEP: Saw pod success
Jan 13 18:51:26.095: INFO: Pod "client-envvars-54460177-55d0-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:51:26.099: INFO: Trying to get logs from node hunter-control-plane pod client-envvars-54460177-55d0-11eb-8355-0242ac110009 container env3cont: 
STEP: delete the pod
Jan 13 18:51:26.122: INFO: Waiting for pod client-envvars-54460177-55d0-11eb-8355-0242ac110009 to disappear
Jan 13 18:51:26.126: INFO: Pod client-envvars-54460177-55d0-11eb-8355-0242ac110009 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:51:26.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ks7hj" for this suite.
Jan 13 18:52:04.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:52:04.231: INFO: namespace: e2e-tests-pods-ks7hj, resource: bindings, ignored listing per whitelist
Jan 13 18:52:04.305: INFO: namespace e2e-tests-pods-ks7hj deletion completed in 38.152246624s

• [SLOW TEST:46.428 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:52:04.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-6d89024d-55d0-11eb-8355-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 13 18:52:04.443: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6d8b8920-55d0-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-2b5vg" to be "success or failure"
Jan 13 18:52:04.447: INFO: Pod "pod-projected-configmaps-6d8b8920-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067067ms
Jan 13 18:52:06.451: INFO: Pod "pod-projected-configmaps-6d8b8920-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008165866s
Jan 13 18:52:08.455: INFO: Pod "pod-projected-configmaps-6d8b8920-55d0-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012614365s
STEP: Saw pod success
Jan 13 18:52:08.455: INFO: Pod "pod-projected-configmaps-6d8b8920-55d0-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:52:08.458: INFO: Trying to get logs from node hunter-control-plane pod pod-projected-configmaps-6d8b8920-55d0-11eb-8355-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 13 18:52:08.475: INFO: Waiting for pod pod-projected-configmaps-6d8b8920-55d0-11eb-8355-0242ac110009 to disappear
Jan 13 18:52:08.480: INFO: Pod pod-projected-configmaps-6d8b8920-55d0-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:52:08.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2b5vg" for this suite.
Jan 13 18:52:14.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:52:14.516: INFO: namespace: e2e-tests-projected-2b5vg, resource: bindings, ignored listing per whitelist
Jan 13 18:52:14.586: INFO: namespace e2e-tests-projected-2b5vg deletion completed in 6.10311287s

• [SLOW TEST:10.281 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:52:14.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 13 18:52:18.749: INFO: Pod pod-hostip-73ac8343-55d0-11eb-8355-0242ac110009 has hostIP: 172.18.0.11
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:52:18.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bzstc" for this suite.
Jan 13 18:52:40.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:52:40.794: INFO: namespace: e2e-tests-pods-bzstc, resource: bindings, ignored listing per whitelist
Jan 13 18:52:40.851: INFO: namespace e2e-tests-pods-bzstc deletion completed in 22.09908966s

• [SLOW TEST:26.264 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:52:40.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-834bf9c9-55d0-11eb-8355-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 13 18:52:40.953: INFO: Waiting up to 5m0s for pod "pod-secrets-834c9deb-55d0-11eb-8355-0242ac110009" in namespace "e2e-tests-secrets-nntld" to be "success or failure"
Jan 13 18:52:40.963: INFO: Pod "pod-secrets-834c9deb-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 9.676463ms
Jan 13 18:52:42.966: INFO: Pod "pod-secrets-834c9deb-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0131886s
Jan 13 18:52:44.970: INFO: Pod "pod-secrets-834c9deb-55d0-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017447131s
STEP: Saw pod success
Jan 13 18:52:44.971: INFO: Pod "pod-secrets-834c9deb-55d0-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:52:44.973: INFO: Trying to get logs from node hunter-control-plane pod pod-secrets-834c9deb-55d0-11eb-8355-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan 13 18:52:45.112: INFO: Waiting for pod pod-secrets-834c9deb-55d0-11eb-8355-0242ac110009 to disappear
Jan 13 18:52:45.130: INFO: Pod pod-secrets-834c9deb-55d0-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:52:45.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nntld" for this suite.
Jan 13 18:52:51.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:52:51.168: INFO: namespace: e2e-tests-secrets-nntld, resource: bindings, ignored listing per whitelist
Jan 13 18:52:51.235: INFO: namespace e2e-tests-secrets-nntld deletion completed in 6.101691001s

• [SLOW TEST:10.384 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:52:51.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 13 18:52:51.351: INFO: Waiting up to 5m0s for pod "downwardapi-volume-897f3149-55d0-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-svnw7" to be "success or failure"
Jan 13 18:52:51.393: INFO: Pod "downwardapi-volume-897f3149-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 41.678948ms
Jan 13 18:52:53.397: INFO: Pod "downwardapi-volume-897f3149-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045306752s
Jan 13 18:52:55.401: INFO: Pod "downwardapi-volume-897f3149-55d0-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049622967s
STEP: Saw pod success
Jan 13 18:52:55.401: INFO: Pod "downwardapi-volume-897f3149-55d0-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:52:55.404: INFO: Trying to get logs from node hunter-control-plane pod downwardapi-volume-897f3149-55d0-11eb-8355-0242ac110009 container client-container: 
STEP: delete the pod
Jan 13 18:52:55.422: INFO: Waiting for pod downwardapi-volume-897f3149-55d0-11eb-8355-0242ac110009 to disappear
Jan 13 18:52:55.428: INFO: Pod downwardapi-volume-897f3149-55d0-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:52:55.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-svnw7" for this suite.
Jan 13 18:53:01.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:53:01.537: INFO: namespace: e2e-tests-projected-svnw7, resource: bindings, ignored listing per whitelist
Jan 13 18:53:01.542: INFO: namespace e2e-tests-projected-svnw7 deletion completed in 6.111792606s

• [SLOW TEST:10.307 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
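The test block above exercises a projected downward API volume with a per-item file `mode`. A minimal manifest sketch of the same shape (pod and path names are hypothetical, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo      # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400             # the per-item mode the test verifies
            fieldRef:
              fieldPath: metadata.name
  restartPolicy: Never
```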
SSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:53:01.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 13 18:53:01.674: INFO: Waiting up to 5m0s for pod "downward-api-8fa749e7-55d0-11eb-8355-0242ac110009" in namespace "e2e-tests-downward-api-t6754" to be "success or failure"
Jan 13 18:53:01.685: INFO: Pod "downward-api-8fa749e7-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 10.890495ms
Jan 13 18:53:03.835: INFO: Pod "downward-api-8fa749e7-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16089818s
Jan 13 18:53:05.839: INFO: Pod "downward-api-8fa749e7-55d0-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.16457301s
STEP: Saw pod success
Jan 13 18:53:05.839: INFO: Pod "downward-api-8fa749e7-55d0-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:53:05.843: INFO: Trying to get logs from node hunter-control-plane pod downward-api-8fa749e7-55d0-11eb-8355-0242ac110009 container dapi-container: 
STEP: delete the pod
Jan 13 18:53:05.891: INFO: Waiting for pod downward-api-8fa749e7-55d0-11eb-8355-0242ac110009 to disappear
Jan 13 18:53:05.903: INFO: Pod downward-api-8fa749e7-55d0-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:53:05.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t6754" for this suite.
Jan 13 18:53:11.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:53:11.952: INFO: namespace: e2e-tests-downward-api-t6754, resource: bindings, ignored listing per whitelist
Jan 13 18:53:12.032: INFO: namespace e2e-tests-downward-api-t6754 deletion completed in 6.125476291s

• [SLOW TEST:10.489 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
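The Downward API test above injects the node's IP into the container environment. A sketch of the env-var wiring it checks (names hypothetical; `status.hostIP` is the field under test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-demo   # hypothetical name
spec:
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP # exposes the scheduled node's IP
  restartPolicy: Never
```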
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:53:12.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-95e737c8-55d0-11eb-8355-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 13 18:53:12.199: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-95edce29-55d0-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-bntr6" to be "success or failure"
Jan 13 18:53:12.203: INFO: Pod "pod-projected-configmaps-95edce29-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.558609ms
Jan 13 18:53:14.209: INFO: Pod "pod-projected-configmaps-95edce29-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009760074s
Jan 13 18:53:16.215: INFO: Pod "pod-projected-configmaps-95edce29-55d0-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015514957s
STEP: Saw pod success
Jan 13 18:53:16.215: INFO: Pod "pod-projected-configmaps-95edce29-55d0-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:53:16.217: INFO: Trying to get logs from node hunter-control-plane pod pod-projected-configmaps-95edce29-55d0-11eb-8355-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 13 18:53:16.259: INFO: Waiting for pod pod-projected-configmaps-95edce29-55d0-11eb-8355-0242ac110009 to disappear
Jan 13 18:53:16.291: INFO: Pod pod-projected-configmaps-95edce29-55d0-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:53:16.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bntr6" for this suite.
Jan 13 18:53:22.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:53:22.372: INFO: namespace: e2e-tests-projected-bntr6, resource: bindings, ignored listing per whitelist
Jan 13 18:53:22.420: INFO: namespace e2e-tests-projected-bntr6 deletion completed in 6.126410309s

• [SLOW TEST:10.389 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
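The projected ConfigMap test above mounts a ConfigMap through a projected volume with `defaultMode` set. A volume-spec sketch of that arrangement (ConfigMap and volume names hypothetical):

```yaml
# fragment of a Pod spec
volumes:
- name: projected-configmap-volume
  projected:
    defaultMode: 0400              # applied to all projected files
    sources:
    - configMap:
        name: my-configmap         # hypothetical ConfigMap name
```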
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:53:22.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-9c14f66e-55d0-11eb-8355-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 13 18:53:22.609: INFO: Waiting up to 5m0s for pod "pod-secrets-9c16e8a4-55d0-11eb-8355-0242ac110009" in namespace "e2e-tests-secrets-jbhwq" to be "success or failure"
Jan 13 18:53:22.611: INFO: Pod "pod-secrets-9c16e8a4-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.518862ms
Jan 13 18:53:24.680: INFO: Pod "pod-secrets-9c16e8a4-55d0-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071573167s
Jan 13 18:53:26.685: INFO: Pod "pod-secrets-9c16e8a4-55d0-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076041773s
STEP: Saw pod success
Jan 13 18:53:26.685: INFO: Pod "pod-secrets-9c16e8a4-55d0-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 18:53:26.688: INFO: Trying to get logs from node hunter-control-plane pod pod-secrets-9c16e8a4-55d0-11eb-8355-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan 13 18:53:26.711: INFO: Waiting for pod pod-secrets-9c16e8a4-55d0-11eb-8355-0242ac110009 to disappear
Jan 13 18:53:26.715: INFO: Pod pod-secrets-9c16e8a4-55d0-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:53:26.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-jbhwq" for this suite.
Jan 13 18:53:32.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:53:32.797: INFO: namespace: e2e-tests-secrets-jbhwq, resource: bindings, ignored listing per whitelist
Jan 13 18:53:32.831: INFO: namespace e2e-tests-secrets-jbhwq deletion completed in 6.113662144s

• [SLOW TEST:10.411 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
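The Secrets test above mounts a secret volume as a non-root user with both `defaultMode` and `fsGroup` set. A spec fragment sketching that combination (secret name and IDs hypothetical):

```yaml
# fragment of a Pod spec
securityContext:
  runAsUser: 1000                  # non-root user
  fsGroup: 1000                    # group ownership applied to the volume
volumes:
- name: secret-volume
  secret:
    secretName: my-secret          # hypothetical Secret name
    defaultMode: 0440
```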
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:53:32.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-l7hwf
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-l7hwf
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-l7hwf
Jan 13 18:53:32.984: INFO: Found 0 stateful pods, waiting for 1
Jan 13 18:53:42.989: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 13 18:53:42.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 13 18:53:43.253: INFO: stderr: "I0113 18:53:43.134527    3029 log.go:172] (0xc000138790) (0xc0005c3400) Create stream\nI0113 18:53:43.134577    3029 log.go:172] (0xc000138790) (0xc0005c3400) Stream added, broadcasting: 1\nI0113 18:53:43.138740    3029 log.go:172] (0xc000138790) Reply frame received for 1\nI0113 18:53:43.138825    3029 log.go:172] (0xc000138790) (0xc0002f2000) Create stream\nI0113 18:53:43.138846    3029 log.go:172] (0xc000138790) (0xc0002f2000) Stream added, broadcasting: 3\nI0113 18:53:43.139812    3029 log.go:172] (0xc000138790) Reply frame received for 3\nI0113 18:53:43.139883    3029 log.go:172] (0xc000138790) (0xc0006ac000) Create stream\nI0113 18:53:43.139912    3029 log.go:172] (0xc000138790) (0xc0006ac000) Stream added, broadcasting: 5\nI0113 18:53:43.140943    3029 log.go:172] (0xc000138790) Reply frame received for 5\nI0113 18:53:43.246084    3029 log.go:172] (0xc000138790) Data frame received for 5\nI0113 18:53:43.246137    3029 log.go:172] (0xc0006ac000) (5) Data frame handling\nI0113 18:53:43.246166    3029 log.go:172] (0xc000138790) Data frame received for 3\nI0113 18:53:43.246177    3029 log.go:172] (0xc0002f2000) (3) Data frame handling\nI0113 18:53:43.246186    3029 log.go:172] (0xc0002f2000) (3) Data frame sent\nI0113 18:53:43.246195    3029 log.go:172] (0xc000138790) Data frame received for 3\nI0113 18:53:43.246205    3029 log.go:172] (0xc0002f2000) (3) Data frame handling\nI0113 18:53:43.248294    3029 log.go:172] (0xc000138790) Data frame received for 1\nI0113 18:53:43.248309    3029 log.go:172] (0xc0005c3400) (1) Data frame handling\nI0113 18:53:43.248316    3029 log.go:172] (0xc0005c3400) (1) Data frame sent\nI0113 18:53:43.248324    3029 log.go:172] (0xc000138790) (0xc0005c3400) Stream removed, broadcasting: 1\nI0113 18:53:43.248342    3029 log.go:172] (0xc000138790) Go away received\nI0113 18:53:43.248513    3029 log.go:172] (0xc000138790) (0xc0005c3400) Stream removed, broadcasting: 1\nI0113 18:53:43.248537    3029 log.go:172] (0xc000138790) (0xc0002f2000) Stream removed, broadcasting: 3\nI0113 18:53:43.248547    3029 log.go:172] (0xc000138790) (0xc0006ac000) Stream removed, broadcasting: 5\n"
Jan 13 18:53:43.253: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 13 18:53:43.253: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 13 18:53:43.256: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 13 18:53:53.261: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 13 18:53:53.261: INFO: Waiting for statefulset status.replicas updated to 0
Jan 13 18:53:53.288: INFO: POD   NODE                  PHASE    GRACE  CONDITIONS
Jan 13 18:53:53.288: INFO: ss-0  hunter-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:33 +0000 UTC  }]
Jan 13 18:53:53.288: INFO: 
Jan 13 18:53:53.288: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 13 18:53:54.310: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.983144912s
Jan 13 18:53:55.315: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.961076145s
Jan 13 18:53:56.320: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.95587504s
Jan 13 18:53:57.324: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.950833433s
Jan 13 18:53:58.330: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.946500244s
Jan 13 18:53:59.334: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.940942649s
Jan 13 18:54:00.339: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.936645327s
Jan 13 18:54:01.344: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.931689242s
Jan 13 18:54:02.349: INFO: Verifying statefulset ss doesn't scale past 3 for another 926.465882ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-l7hwf
Jan 13 18:54:03.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:54:03.615: INFO: stderr: "I0113 18:54:03.512274    3053 log.go:172] (0xc000202420) (0xc000748640) Create stream\nI0113 18:54:03.512358    3053 log.go:172] (0xc000202420) (0xc000748640) Stream added, broadcasting: 1\nI0113 18:54:03.518174    3053 log.go:172] (0xc000202420) Reply frame received for 1\nI0113 18:54:03.518244    3053 log.go:172] (0xc000202420) (0xc0004dac80) Create stream\nI0113 18:54:03.518273    3053 log.go:172] (0xc000202420) (0xc0004dac80) Stream added, broadcasting: 3\nI0113 18:54:03.519411    3053 log.go:172] (0xc000202420) Reply frame received for 3\nI0113 18:54:03.519464    3053 log.go:172] (0xc000202420) (0xc0007486e0) Create stream\nI0113 18:54:03.519492    3053 log.go:172] (0xc000202420) (0xc0007486e0) Stream added, broadcasting: 5\nI0113 18:54:03.520394    3053 log.go:172] (0xc000202420) Reply frame received for 5\nI0113 18:54:03.608007    3053 log.go:172] (0xc000202420) Data frame received for 5\nI0113 18:54:03.608053    3053 log.go:172] (0xc0007486e0) (5) Data frame handling\nI0113 18:54:03.608082    3053 log.go:172] (0xc000202420) Data frame received for 3\nI0113 18:54:03.608093    3053 log.go:172] (0xc0004dac80) (3) Data frame handling\nI0113 18:54:03.608106    3053 log.go:172] (0xc0004dac80) (3) Data frame sent\nI0113 18:54:03.608118    3053 log.go:172] (0xc000202420) Data frame received for 3\nI0113 18:54:03.608128    3053 log.go:172] (0xc0004dac80) (3) Data frame handling\nI0113 18:54:03.609553    3053 log.go:172] (0xc000202420) Data frame received for 1\nI0113 18:54:03.609587    3053 log.go:172] (0xc000748640) (1) Data frame handling\nI0113 18:54:03.609606    3053 log.go:172] (0xc000748640) (1) Data frame sent\nI0113 18:54:03.609622    3053 log.go:172] (0xc000202420) (0xc000748640) Stream removed, broadcasting: 1\nI0113 18:54:03.609640    3053 log.go:172] (0xc000202420) Go away received\nI0113 18:54:03.609880    3053 log.go:172] (0xc000202420) (0xc000748640) Stream removed, broadcasting: 1\nI0113 18:54:03.609917    3053 log.go:172] (0xc000202420) (0xc0004dac80) Stream removed, broadcasting: 3\nI0113 18:54:03.609938    3053 log.go:172] (0xc000202420) (0xc0007486e0) Stream removed, broadcasting: 5\n"
Jan 13 18:54:03.615: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 13 18:54:03.615: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 13 18:54:03.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:54:03.824: INFO: stderr: "I0113 18:54:03.745823    3075 log.go:172] (0xc000138630) (0xc000732640) Create stream\nI0113 18:54:03.745890    3075 log.go:172] (0xc000138630) (0xc000732640) Stream added, broadcasting: 1\nI0113 18:54:03.748822    3075 log.go:172] (0xc000138630) Reply frame received for 1\nI0113 18:54:03.748953    3075 log.go:172] (0xc000138630) (0xc0005bebe0) Create stream\nI0113 18:54:03.748972    3075 log.go:172] (0xc000138630) (0xc0005bebe0) Stream added, broadcasting: 3\nI0113 18:54:03.749812    3075 log.go:172] (0xc000138630) Reply frame received for 3\nI0113 18:54:03.749839    3075 log.go:172] (0xc000138630) (0xc00064a000) Create stream\nI0113 18:54:03.749848    3075 log.go:172] (0xc000138630) (0xc00064a000) Stream added, broadcasting: 5\nI0113 18:54:03.750591    3075 log.go:172] (0xc000138630) Reply frame received for 5\nI0113 18:54:03.817822    3075 log.go:172] (0xc000138630) Data frame received for 3\nI0113 18:54:03.817854    3075 log.go:172] (0xc0005bebe0) (3) Data frame handling\nI0113 18:54:03.817865    3075 log.go:172] (0xc0005bebe0) (3) Data frame sent\nI0113 18:54:03.817872    3075 log.go:172] (0xc000138630) Data frame received for 3\nI0113 18:54:03.817879    3075 log.go:172] (0xc0005bebe0) (3) Data frame handling\nI0113 18:54:03.817909    3075 log.go:172] (0xc000138630) Data frame received for 5\nI0113 18:54:03.817917    3075 log.go:172] (0xc00064a000) (5) Data frame handling\nI0113 18:54:03.817925    3075 log.go:172] (0xc00064a000) (5) Data frame sent\nI0113 18:54:03.817932    3075 log.go:172] (0xc000138630) Data frame received for 5\nI0113 18:54:03.817941    3075 log.go:172] (0xc00064a000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0113 18:54:03.819569    3075 log.go:172] (0xc000138630) Data frame received for 1\nI0113 18:54:03.819604    3075 log.go:172] (0xc000732640) (1) Data frame handling\nI0113 18:54:03.819622    3075 log.go:172] (0xc000732640) (1) Data frame sent\nI0113 18:54:03.819637    3075 log.go:172] (0xc000138630) (0xc000732640) Stream removed, broadcasting: 1\nI0113 18:54:03.819815    3075 log.go:172] (0xc000138630) (0xc000732640) Stream removed, broadcasting: 1\nI0113 18:54:03.819829    3075 log.go:172] (0xc000138630) (0xc0005bebe0) Stream removed, broadcasting: 3\nI0113 18:54:03.819894    3075 log.go:172] (0xc000138630) Go away received\nI0113 18:54:03.819939    3075 log.go:172] (0xc000138630) (0xc00064a000) Stream removed, broadcasting: 5\n"
Jan 13 18:54:03.824: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 13 18:54:03.824: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 13 18:54:03.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:54:04.029: INFO: stderr: "I0113 18:54:03.950535    3098 log.go:172] (0xc0007e2420) (0xc000744640) Create stream\nI0113 18:54:03.950599    3098 log.go:172] (0xc0007e2420) (0xc000744640) Stream added, broadcasting: 1\nI0113 18:54:03.953006    3098 log.go:172] (0xc0007e2420) Reply frame received for 1\nI0113 18:54:03.953059    3098 log.go:172] (0xc0007e2420) (0xc0007446e0) Create stream\nI0113 18:54:03.953072    3098 log.go:172] (0xc0007e2420) (0xc0007446e0) Stream added, broadcasting: 3\nI0113 18:54:03.953838    3098 log.go:172] (0xc0007e2420) Reply frame received for 3\nI0113 18:54:03.953881    3098 log.go:172] (0xc0007e2420) (0xc00061ce60) Create stream\nI0113 18:54:03.953890    3098 log.go:172] (0xc0007e2420) (0xc00061ce60) Stream added, broadcasting: 5\nI0113 18:54:03.954510    3098 log.go:172] (0xc0007e2420) Reply frame received for 5\nI0113 18:54:04.021602    3098 log.go:172] (0xc0007e2420) Data frame received for 5\nI0113 18:54:04.021639    3098 log.go:172] (0xc00061ce60) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0113 18:54:04.021670    3098 log.go:172] (0xc0007e2420) Data frame received for 3\nI0113 18:54:04.021705    3098 log.go:172] (0xc0007446e0) (3) Data frame handling\nI0113 18:54:04.021714    3098 log.go:172] (0xc0007446e0) (3) Data frame sent\nI0113 18:54:04.021721    3098 log.go:172] (0xc0007e2420) Data frame received for 3\nI0113 18:54:04.021746    3098 log.go:172] (0xc00061ce60) (5) Data frame sent\nI0113 18:54:04.021795    3098 log.go:172] (0xc0007e2420) Data frame received for 5\nI0113 18:54:04.021818    3098 log.go:172] (0xc00061ce60) (5) Data frame handling\nI0113 18:54:04.021856    3098 log.go:172] (0xc0007446e0) (3) Data frame handling\nI0113 18:54:04.023938    3098 log.go:172] (0xc0007e2420) Data frame received for 1\nI0113 18:54:04.024042    3098 log.go:172] (0xc000744640) (1) Data frame handling\nI0113 18:54:04.024080    3098 log.go:172] (0xc000744640) (1) Data frame sent\nI0113 18:54:04.024098    3098 log.go:172] (0xc0007e2420) (0xc000744640) Stream removed, broadcasting: 1\nI0113 18:54:04.024128    3098 log.go:172] (0xc0007e2420) Go away received\nI0113 18:54:04.024416    3098 log.go:172] (0xc0007e2420) (0xc000744640) Stream removed, broadcasting: 1\nI0113 18:54:04.024449    3098 log.go:172] (0xc0007e2420) (0xc0007446e0) Stream removed, broadcasting: 3\nI0113 18:54:04.024468    3098 log.go:172] (0xc0007e2420) (0xc00061ce60) Stream removed, broadcasting: 5\n"
Jan 13 18:54:04.029: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 13 18:54:04.029: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 13 18:54:04.033: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jan 13 18:54:14.039: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 18:54:14.039: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 13 18:54:14.039: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 13 18:54:14.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 13 18:54:14.271: INFO: stderr: "I0113 18:54:14.167474    3121 log.go:172] (0xc0001386e0) (0xc000714640) Create stream\nI0113 18:54:14.167531    3121 log.go:172] (0xc0001386e0) (0xc000714640) Stream added, broadcasting: 1\nI0113 18:54:14.169916    3121 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0113 18:54:14.169969    3121 log.go:172] (0xc0001386e0) (0xc000698be0) Create stream\nI0113 18:54:14.169985    3121 log.go:172] (0xc0001386e0) (0xc000698be0) Stream added, broadcasting: 3\nI0113 18:54:14.170999    3121 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0113 18:54:14.171044    3121 log.go:172] (0xc0001386e0) (0xc0007146e0) Create stream\nI0113 18:54:14.171057    3121 log.go:172] (0xc0001386e0) (0xc0007146e0) Stream added, broadcasting: 5\nI0113 18:54:14.172078    3121 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0113 18:54:14.264396    3121 log.go:172] (0xc0001386e0) Data frame received for 3\nI0113 18:54:14.264470    3121 log.go:172] (0xc000698be0) (3) Data frame handling\nI0113 18:54:14.264506    3121 log.go:172] (0xc000698be0) (3) Data frame sent\nI0113 18:54:14.264538    3121 log.go:172] (0xc0001386e0) Data frame received for 3\nI0113 18:54:14.264586    3121 log.go:172] (0xc0001386e0) Data frame received for 5\nI0113 18:54:14.264643    3121 log.go:172] (0xc0007146e0) (5) Data frame handling\nI0113 18:54:14.264675    3121 log.go:172] (0xc000698be0) (3) Data frame handling\nI0113 18:54:14.266033    3121 log.go:172] (0xc0001386e0) Data frame received for 1\nI0113 18:54:14.266074    3121 log.go:172] (0xc000714640) (1) Data frame handling\nI0113 18:54:14.266102    3121 log.go:172] (0xc000714640) (1) Data frame sent\nI0113 18:54:14.266126    3121 log.go:172] (0xc0001386e0) (0xc000714640) Stream removed, broadcasting: 1\nI0113 18:54:14.266167    3121 log.go:172] (0xc0001386e0) Go away received\nI0113 18:54:14.266503    3121 log.go:172] (0xc0001386e0) (0xc000714640) Stream removed, broadcasting: 1\nI0113 18:54:14.266547    3121 log.go:172] (0xc0001386e0) (0xc000698be0) Stream removed, broadcasting: 3\nI0113 18:54:14.266578    3121 log.go:172] (0xc0001386e0) (0xc0007146e0) Stream removed, broadcasting: 5\n"
Jan 13 18:54:14.271: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 13 18:54:14.271: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 13 18:54:14.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 13 18:54:14.522: INFO: stderr: "I0113 18:54:14.393869    3143 log.go:172] (0xc0007bc160) (0xc0008a6500) Create stream\nI0113 18:54:14.393918    3143 log.go:172] (0xc0007bc160) (0xc0008a6500) Stream added, broadcasting: 1\nI0113 18:54:14.396252    3143 log.go:172] (0xc0007bc160) Reply frame received for 1\nI0113 18:54:14.396291    3143 log.go:172] (0xc0007bc160) (0xc0005cc000) Create stream\nI0113 18:54:14.396302    3143 log.go:172] (0xc0007bc160) (0xc0005cc000) Stream added, broadcasting: 3\nI0113 18:54:14.397491    3143 log.go:172] (0xc0007bc160) Reply frame received for 3\nI0113 18:54:14.397563    3143 log.go:172] (0xc0007bc160) (0xc0008a65a0) Create stream\nI0113 18:54:14.397591    3143 log.go:172] (0xc0007bc160) (0xc0008a65a0) Stream added, broadcasting: 5\nI0113 18:54:14.398655    3143 log.go:172] (0xc0007bc160) Reply frame received for 5\nI0113 18:54:14.516048    3143 log.go:172] (0xc0007bc160) Data frame received for 5\nI0113 18:54:14.516098    3143 log.go:172] (0xc0007bc160) Data frame received for 3\nI0113 18:54:14.516156    3143 log.go:172] (0xc0005cc000) (3) Data frame handling\nI0113 18:54:14.516194    3143 log.go:172] (0xc0005cc000) (3) Data frame sent\nI0113 18:54:14.516236    3143 log.go:172] (0xc0007bc160) Data frame received for 3\nI0113 18:54:14.516256    3143 log.go:172] (0xc0005cc000) (3) Data frame handling\nI0113 18:54:14.516408    3143 log.go:172] (0xc0008a65a0) (5) Data frame handling\nI0113 18:54:14.517867    3143 log.go:172] (0xc0007bc160) Data frame received for 1\nI0113 18:54:14.517895    3143 log.go:172] (0xc0008a6500) (1) Data frame handling\nI0113 18:54:14.517910    3143 log.go:172] (0xc0008a6500) (1) Data frame sent\nI0113 18:54:14.517938    3143 log.go:172] (0xc0007bc160) (0xc0008a6500) Stream removed, broadcasting: 1\nI0113 18:54:14.517968    3143 log.go:172] (0xc0007bc160) Go away received\nI0113 18:54:14.518169    3143 log.go:172] (0xc0007bc160) (0xc0008a6500) Stream removed, broadcasting: 1\nI0113 18:54:14.518203    3143 log.go:172] (0xc0007bc160) (0xc0005cc000) Stream removed, broadcasting: 3\nI0113 18:54:14.518221    3143 log.go:172] (0xc0007bc160) (0xc0008a65a0) Stream removed, broadcasting: 5\n"
Jan 13 18:54:14.522: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 13 18:54:14.522: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 13 18:54:14.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 13 18:54:14.763: INFO: stderr: "I0113 18:54:14.646549    3165 log.go:172] (0xc0008a62c0) (0xc000774640) Create stream\nI0113 18:54:14.646624    3165 log.go:172] (0xc0008a62c0) (0xc000774640) Stream added, broadcasting: 1\nI0113 18:54:14.648590    3165 log.go:172] (0xc0008a62c0) Reply frame received for 1\nI0113 18:54:14.648635    3165 log.go:172] (0xc0008a62c0) (0xc000608f00) Create stream\nI0113 18:54:14.648651    3165 log.go:172] (0xc0008a62c0) (0xc000608f00) Stream added, broadcasting: 3\nI0113 18:54:14.649545    3165 log.go:172] (0xc0008a62c0) Reply frame received for 3\nI0113 18:54:14.649599    3165 log.go:172] (0xc0008a62c0) (0xc000609040) Create stream\nI0113 18:54:14.649621    3165 log.go:172] (0xc0008a62c0) (0xc000609040) Stream added, broadcasting: 5\nI0113 18:54:14.650430    3165 log.go:172] (0xc0008a62c0) Reply frame received for 5\nI0113 18:54:14.754704    3165 log.go:172] (0xc0008a62c0) Data frame received for 3\nI0113 18:54:14.754747    3165 log.go:172] (0xc000608f00) (3) Data frame handling\nI0113 18:54:14.754775    3165 log.go:172] (0xc000608f00) (3) Data frame sent\nI0113 18:54:14.754788    3165 log.go:172] (0xc0008a62c0) Data frame received for 3\nI0113 18:54:14.754802    3165 log.go:172] (0xc000608f00) (3) Data frame handling\nI0113 18:54:14.755051    3165 log.go:172] (0xc0008a62c0) Data frame received for 5\nI0113 18:54:14.755095    3165 log.go:172] (0xc000609040) (5) Data frame handling\nI0113 18:54:14.757573    3165 log.go:172] (0xc0008a62c0) Data frame received for 1\nI0113 18:54:14.757609    3165 log.go:172] (0xc000774640) (1) Data frame handling\nI0113 18:54:14.757641    3165 log.go:172] (0xc000774640) (1) Data frame sent\nI0113 18:54:14.757675    3165 log.go:172] (0xc0008a62c0) (0xc000774640) Stream removed, broadcasting: 1\nI0113 18:54:14.757705    3165 log.go:172] (0xc0008a62c0) Go away received\nI0113 18:54:14.758000    3165 log.go:172] (0xc0008a62c0) (0xc000774640) Stream removed, broadcasting: 1\nI0113 18:54:14.758025    3165 log.go:172] (0xc0008a62c0) (0xc000608f00) Stream removed, broadcasting: 3\nI0113 18:54:14.758038    3165 log.go:172] (0xc0008a62c0) (0xc000609040) Stream removed, broadcasting: 5\n"
Jan 13 18:54:14.763: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 13 18:54:14.763: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

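The exec step above knocks the pod out of Ready by moving nginx's index.html out of the web root; the `|| true` suffix is what keeps the remote shell's exit status 0 even when the file has already been moved. A minimal local sketch of that idiom, with temporary directories standing in for the container's filesystem (the paths here are illustrative, not the test's):

```shell
# Sketch of the 'mv ... || true' idiom from the log. The stand-in
# directories play the roles of /usr/share/nginx/html and /tmp.
webroot=$(mktemp -d)
stash=$(mktemp -d)
echo "hello" > "$webroot/index.html"

mv -v "$webroot/index.html" "$stash/" || true              # moves the file, rc 0
mv -v "$webroot/index.html" "$stash/" 2>/dev/null || true  # file gone: mv fails, '|| true' keeps rc 0
echo "rc=$?"                                               # prints rc=0 either way
```

Because of `|| true`, the framework's RunHostCmd sees rc 0 whether or not the file was still in place, which is why the same command is safe to issue repeatedly.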
Jan 13 18:54:14.763: INFO: Waiting for statefulset status.replicas updated to 0
Jan 13 18:54:14.766: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 13 18:54:24.774: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 13 18:54:24.774: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 13 18:54:24.774: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 13 18:54:24.786: INFO: POD   NODE                  PHASE    GRACE  CONDITIONS
Jan 13 18:54:24.786: INFO: ss-0  hunter-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:33 +0000 UTC  }]
Jan 13 18:54:24.786: INFO: ss-1  hunter-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  }]
Jan 13 18:54:24.786: INFO: ss-2  hunter-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  }]
Jan 13 18:54:24.786: INFO: 
Jan 13 18:54:24.787: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 13 18:54:25.791: INFO: POD   NODE                  PHASE    GRACE  CONDITIONS
Jan 13 18:54:25.791: INFO: ss-0  hunter-control-plane  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:33 +0000 UTC  }]
Jan 13 18:54:25.791: INFO: ss-1  hunter-control-plane  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  }]
Jan 13 18:54:25.791: INFO: ss-2  hunter-control-plane  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  }]
Jan 13 18:54:25.791: INFO: 
Jan 13 18:54:25.791: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 13 18:54:26.807: INFO: POD   NODE                  PHASE    GRACE  CONDITIONS
Jan 13 18:54:26.807: INFO: ss-0  hunter-control-plane  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:33 +0000 UTC  }]
Jan 13 18:54:26.807: INFO: ss-1  hunter-control-plane  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  }]
Jan 13 18:54:26.807: INFO: ss-2  hunter-control-plane  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  }]
Jan 13 18:54:26.807: INFO: 
Jan 13 18:54:26.807: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 13 18:54:27.812: INFO: POD   NODE                  PHASE    GRACE  CONDITIONS
Jan 13 18:54:27.812: INFO: ss-0  hunter-control-plane  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:33 +0000 UTC  }]
Jan 13 18:54:27.812: INFO: ss-1  hunter-control-plane  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  }]
Jan 13 18:54:27.812: INFO: ss-2  hunter-control-plane  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  }]
Jan 13 18:54:27.812: INFO: 
Jan 13 18:54:27.812: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 13 18:54:28.818: INFO: POD   NODE                  PHASE    GRACE  CONDITIONS
Jan 13 18:54:28.818: INFO: ss-0  hunter-control-plane  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:33 +0000 UTC  }]
Jan 13 18:54:28.818: INFO: ss-1  hunter-control-plane  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  }]
Jan 13 18:54:28.818: INFO: ss-2  hunter-control-plane  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:54:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 18:53:53 +0000 UTC  }]
Jan 13 18:54:28.818: INFO: 
Jan 13 18:54:28.818: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 13 18:54:29 through 18:54:33: INFO: (status unchanged: ss-0, ss-1, and ss-2 all Pending on hunter-control-plane with grace 30s and the same ContainersNotReady conditions; polled once per second)
Jan 13 18:54:33.841: INFO: StatefulSet ss has not reached scale 0, at 3
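The once-per-second status dumps above are the framework polling until the StatefulSet reports scale 0. The wait loop can be sketched in shell with a stub in place of the real replica query; `poll`, `count`, and the attempt budget here are illustrative, not the framework's actual code:

```shell
# Stub: pretend the replica count drops by one each time we ask,
# the way the StatefulSet above eventually drains to 0.
count=3
poll() { replicas=$count; count=$((count - 1)); }

# Poll once per second until scale 0 or the attempt budget runs out.
wait_for_zero() {
  tries=0
  while :; do
    poll
    if [ "$replicas" -eq 0 ]; then
      echo "reached scale 0"
      return 0
    fi
    tries=$((tries + 1))
    if [ "$tries" -ge "$1" ]; then
      echo "StatefulSet ss has not reached scale 0, at $replicas"
      return 1
    fi
    sleep 1
  done
}

wait_for_zero 10   # prints: reached scale 0
```

Each failed iteration corresponds to one "has not reached scale 0" line in the log; the real framework also re-dumps every pod's conditions per iteration.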
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-l7hwf
Jan 13 18:54:34.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:54:34.971: INFO: rc: 1
Jan 13 18:54:34.971: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001973890 exit status 1   true [0xc000a38878 0xc000a38890 0xc000a388a8] [0xc000a38878 0xc000a38890 0xc000a388a8] [0xc000a38888 0xc000a388a0] [0x935700 0x935700] 0xc0016ed8c0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan 13 18:54:44.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:54:45.065: INFO: rc: 1
Jan 13 18:54:45.065: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0023fcea0 exit status 1   true [0xc0025f6690 0xc0025f66a8 0xc0025f66c0] [0xc0025f6690 0xc0025f66a8 0xc0025f66c0] [0xc0025f66a0 0xc0025f66b8] [0x935700 0x935700] 0xc00148aba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

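Each failed RunHostCmd above is followed by a fixed 10s pause and another attempt. That retry discipline can be sketched as a small shell helper; the helper name, the attempt budget, and the `false`/`true` stand-ins for the real kubectl exec call are all illustrative:

```shell
# Retry a command with a fixed pause between attempts, the way the
# framework re-runs the failed kubectl exec every 10s.
retry() {
  max=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts"
      return 1
    fi
    sleep 1   # the e2e framework waits 10s here
  done
  echo "succeeded after $((n + 1)) attempts"
}

retry 3 true            # prints: succeeded after 1 attempts
retry 3 false || true   # always fails, like exec into a deleted pod
```

Note that the retries here can never succeed: once the pod is deleted, the error merely changes from "container not found" to "pods ss-0 not found", and the loop runs out its budget.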
Jan 13 18:54:55 through 18:57:26: INFO: (RunHostCmd retried every 10s with the same command; every attempt returned rc: 1 with stderr 'Error from server (NotFound): pods "ss-0" not found')
Jan 13 18:57:36.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:57:36.622: INFO: rc: 1
Jan 13 18:57:36.622: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002326120 exit status 1   true [0xc00016e000 0xc000432de0 0xc000432ee0] [0xc00016e000 0xc000432de0 0xc000432ee0] [0xc000432dc0 0xc000432e78] [0x935700 0x935700] 0xc00182d7a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 13 18:57:46.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:57:46.725: INFO: rc: 1
Jan 13 18:57:46.725: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0021da150 exit status 1   true [0xc0009e2028 0xc0009e2170 0xc0009e2270] [0xc0009e2028 0xc0009e2170 0xc0009e2270] [0xc0009e2080 0xc0009e2250] [0x935700 0x935700] 0xc0016db860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 13 18:57:56.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:57:56.826: INFO: rc: 1
Jan 13 18:57:56.826: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0023262d0 exit status 1   true [0xc000432f28 0xc000432f78 0xc000432fc8] [0xc000432f28 0xc000432f78 0xc000432fc8] [0xc000432f58 0xc000432fc0] [0x935700 0x935700] 0xc00190e9c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 13 18:58:06.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:58:06.918: INFO: rc: 1
Jan 13 18:58:06.918: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0023263f0 exit status 1   true [0xc000432fd8 0xc000433020 0xc0004330a8] [0xc000432fd8 0xc000433020 0xc0004330a8] [0xc000432ff8 0xc0004330a0] [0x935700 0x935700] 0xc00190fd40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 13 18:58:16.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:58:17.007: INFO: rc: 1
Jan 13 18:58:17.007: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002326510 exit status 1   true [0xc0004330c8 0xc000433118 0xc000433148] [0xc0004330c8 0xc000433118 0xc000433148] [0xc000433100 0xc000433138] [0x935700 0x935700] 0xc001fae180 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 13 18:58:27.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:58:27.104: INFO: rc: 1
Jan 13 18:58:27.104: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0011d6150 exit status 1   true [0xc001714000 0xc001714058 0xc0017140c8] [0xc001714000 0xc001714058 0xc0017140c8] [0xc001714038 0xc001714098] [0x935700 0x935700] 0xc0019ea240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 13 18:58:37.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:58:38.449: INFO: rc: 1
Jan 13 18:58:38.449: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0023266c0 exit status 1   true [0xc000433160 0xc000433270 0xc0004332d8] [0xc000433160 0xc000433270 0xc0004332d8] [0xc000433190 0xc0004332a8] [0x935700 0x935700] 0xc001fae480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 13 18:58:48.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:58:48.543: INFO: rc: 1
Jan 13 18:58:48.543: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0011d62a0 exit status 1   true [0xc0017140d8 0xc001714148 0xc0017141b8] [0xc0017140d8 0xc001714148 0xc0017141b8] [0xc001714130 0xc001714190] [0x935700 0x935700] 0xc0019eb500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 13 18:58:58.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:58:58.628: INFO: rc: 1
Jan 13 18:58:58.628: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002326810 exit status 1   true [0xc0004332f0 0xc0004333b0 0xc000433588] [0xc0004332f0 0xc0004333b0 0xc000433588] [0xc000433300 0xc000433580] [0x935700 0x935700] 0xc001faf560 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 13 18:59:08.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:59:08.713: INFO: rc: 1
Jan 13 18:59:08.713: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0011d63c0 exit status 1   true [0xc0017142c0 0xc001714328 0xc0017143d0] [0xc0017142c0 0xc001714328 0xc0017143d0] [0xc001714308 0xc0017143a0] [0x935700 0x935700] 0xc0019eb9e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 13 18:59:18.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:59:18.807: INFO: rc: 1
Jan 13 18:59:18.808: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0021da5d0 exit status 1   true [0xc0009e2288 0xc0009e2430 0xc0009e25d8] [0xc0009e2288 0xc0009e2430 0xc0009e25d8] [0xc0009e2368 0xc0009e2520] [0x935700 0x935700] 0xc00202e780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 13 18:59:28.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:59:28.893: INFO: rc: 1
Jan 13 18:59:28.894: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002ad6120 exit status 1   true [0xc000432c98 0xc000432e20 0xc000432f28] [0xc000432c98 0xc000432e20 0xc000432f28] [0xc000432de0 0xc000432ee0] [0x935700 0x935700] 0xc001191e00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 13 18:59:38.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l7hwf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 13 18:59:38.994: INFO: rc: 1
Jan 13 18:59:38.994: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Jan 13 18:59:38.994: INFO: Scaling statefulset ss to 0
Jan 13 18:59:39.002: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 13 18:59:39.004: INFO: Deleting all statefulset in ns e2e-tests-statefulset-l7hwf
Jan 13 18:59:39.007: INFO: Scaling statefulset ss to 0
Jan 13 18:59:39.015: INFO: Waiting for statefulset status.replicas updated to 0
Jan 13 18:59:39.021: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:59:39.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-l7hwf" for this suite.
Jan 13 18:59:45.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 18:59:45.125: INFO: namespace: e2e-tests-statefulset-l7hwf, resource: bindings, ignored listing per whitelist
Jan 13 18:59:45.238: INFO: namespace e2e-tests-statefulset-l7hwf deletion completed in 6.190223712s

• [SLOW TEST:372.407 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 18:59:45.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-804e0092-55d1-11eb-8355-0242ac110009
Jan 13 18:59:45.421: INFO: Pod name my-hostname-basic-804e0092-55d1-11eb-8355-0242ac110009: Found 0 pods out of 1
Jan 13 18:59:50.426: INFO: Pod name my-hostname-basic-804e0092-55d1-11eb-8355-0242ac110009: Found 1 pods out of 1
Jan 13 18:59:50.426: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-804e0092-55d1-11eb-8355-0242ac110009" are running
Jan 13 18:59:50.430: INFO: Pod "my-hostname-basic-804e0092-55d1-11eb-8355-0242ac110009-qqqnd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 18:59:45 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 18:59:48 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 18:59:48 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 18:59:45 +0000 UTC Reason: Message:}])
Jan 13 18:59:50.430: INFO: Trying to dial the pod
Jan 13 18:59:55.441: INFO: Controller my-hostname-basic-804e0092-55d1-11eb-8355-0242ac110009: Got expected result from replica 1 [my-hostname-basic-804e0092-55d1-11eb-8355-0242ac110009-qqqnd]: "my-hostname-basic-804e0092-55d1-11eb-8355-0242ac110009-qqqnd", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 18:59:55.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-gk6dm" for this suite.
Jan 13 19:00:01.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 19:00:01.550: INFO: namespace: e2e-tests-replication-controller-gk6dm, resource: bindings, ignored listing per whitelist
Jan 13 19:00:01.557: INFO: namespace e2e-tests-replication-controller-gk6dm deletion completed in 6.112338716s

• [SLOW TEST:16.318 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 19:00:01.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0113 19:00:41.789472       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 13 19:00:41.789: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 19:00:41.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-p5bp2" for this suite.
Jan 13 19:00:51.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 19:00:51.929: INFO: namespace: e2e-tests-gc-p5bp2, resource: bindings, ignored listing per whitelist
Jan 13 19:00:51.938: INFO: namespace e2e-tests-gc-p5bp2 deletion completed in 10.145690438s

• [SLOW TEST:50.381 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 19:00:51.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan 13 19:00:52.235: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix025057660/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 19:00:52.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-m92qm" for this suite.
Jan 13 19:00:58.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 19:00:58.518: INFO: namespace: e2e-tests-kubectl-m92qm, resource: bindings, ignored listing per whitelist
Jan 13 19:00:58.605: INFO: namespace e2e-tests-kubectl-m92qm deletion completed in 6.158439925s

• [SLOW TEST:6.667 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 19:00:58.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-zdjnw
Jan 13 19:01:02.763: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-zdjnw
STEP: checking the pod's current state and verifying that restartCount is present
Jan 13 19:01:02.767: INFO: Initial restart count of pod liveness-exec is 0
Jan 13 19:01:52.869: INFO: Restart count of pod e2e-tests-container-probe-zdjnw/liveness-exec is now 1 (50.102167372s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 19:01:52.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-zdjnw" for this suite.
Jan 13 19:01:58.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 19:01:58.978: INFO: namespace: e2e-tests-container-probe-zdjnw, resource: bindings, ignored listing per whitelist
Jan 13 19:01:59.009: INFO: namespace e2e-tests-container-probe-zdjnw deletion completed in 6.124144616s

• [SLOW TEST:60.403 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 19:01:59.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-597kd in namespace e2e-tests-proxy-65xnn
I0113 19:01:59.236531       6 runners.go:184] Created replication controller with name: proxy-service-597kd, namespace: e2e-tests-proxy-65xnn, replica count: 1
I0113 19:02:00.286969       6 runners.go:184] proxy-service-597kd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0113 19:02:01.287216       6 runners.go:184] proxy-service-597kd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0113 19:02:02.287452       6 runners.go:184] proxy-service-597kd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0113 19:02:03.287714       6 runners.go:184] proxy-service-597kd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0113 19:02:04.287941       6 runners.go:184] proxy-service-597kd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0113 19:02:05.288145       6 runners.go:184] proxy-service-597kd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0113 19:02:06.288371       6 runners.go:184] proxy-service-597kd Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 13 19:02:06.291: INFO: setup took 7.161687924s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 13 19:02:06.297: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-65xnn/pods/proxy-service-597kd-bc6l2:160/proxy/: foo (200; 6.024249ms)
Jan 13 19:02:06.297: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-65xnn/pods/proxy-service-597kd-bc6l2:1080/proxy/:
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-dfa34fd7-55d1-11eb-8355-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 13 19:02:25.356: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dfa3c049-55d1-11eb-8355-0242ac110009" in namespace "e2e-tests-projected-n548h" to be "success or failure"
Jan 13 19:02:25.375: INFO: Pod "pod-projected-secrets-dfa3c049-55d1-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 18.848195ms
Jan 13 19:02:27.386: INFO: Pod "pod-projected-secrets-dfa3c049-55d1-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030086613s
Jan 13 19:02:29.390: INFO: Pod "pod-projected-secrets-dfa3c049-55d1-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034725861s
STEP: Saw pod success
Jan 13 19:02:29.391: INFO: Pod "pod-projected-secrets-dfa3c049-55d1-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 19:02:29.394: INFO: Trying to get logs from node hunter-control-plane pod pod-projected-secrets-dfa3c049-55d1-11eb-8355-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Jan 13 19:02:29.438: INFO: Waiting for pod pod-projected-secrets-dfa3c049-55d1-11eb-8355-0242ac110009 to disappear
Jan 13 19:02:29.454: INFO: Pod pod-projected-secrets-dfa3c049-55d1-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 19:02:29.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n548h" for this suite.
Jan 13 19:02:35.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 19:02:35.527: INFO: namespace: e2e-tests-projected-n548h, resource: bindings, ignored listing per whitelist
Jan 13 19:02:35.576: INFO: namespace e2e-tests-projected-n548h deletion completed in 6.118926658s

• [SLOW TEST:10.349 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 19:02:35.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-fhmn
STEP: Creating a pod to test atomic-volume-subpath
Jan 13 19:02:35.769: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fhmn" in namespace "e2e-tests-subpath-njj9k" to be "success or failure"
Jan 13 19:02:35.773: INFO: Pod "pod-subpath-test-downwardapi-fhmn": Phase="Pending", Reason="", readiness=false. Elapsed: 3.337131ms
Jan 13 19:02:37.777: INFO: Pod "pod-subpath-test-downwardapi-fhmn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007538533s
Jan 13 19:02:39.780: INFO: Pod "pod-subpath-test-downwardapi-fhmn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011230495s
Jan 13 19:02:41.785: INFO: Pod "pod-subpath-test-downwardapi-fhmn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015574662s
Jan 13 19:02:43.788: INFO: Pod "pod-subpath-test-downwardapi-fhmn": Phase="Running", Reason="", readiness=false. Elapsed: 8.018910726s
Jan 13 19:02:45.793: INFO: Pod "pod-subpath-test-downwardapi-fhmn": Phase="Running", Reason="", readiness=false. Elapsed: 10.023328547s
Jan 13 19:02:47.797: INFO: Pod "pod-subpath-test-downwardapi-fhmn": Phase="Running", Reason="", readiness=false. Elapsed: 12.027721219s
Jan 13 19:02:49.801: INFO: Pod "pod-subpath-test-downwardapi-fhmn": Phase="Running", Reason="", readiness=false. Elapsed: 14.032228215s
Jan 13 19:02:51.805: INFO: Pod "pod-subpath-test-downwardapi-fhmn": Phase="Running", Reason="", readiness=false. Elapsed: 16.036121003s
Jan 13 19:02:53.810: INFO: Pod "pod-subpath-test-downwardapi-fhmn": Phase="Running", Reason="", readiness=false. Elapsed: 18.040333861s
Jan 13 19:02:57.114: INFO: Pod "pod-subpath-test-downwardapi-fhmn": Phase="Running", Reason="", readiness=false. Elapsed: 21.345148897s
Jan 13 19:02:59.118: INFO: Pod "pod-subpath-test-downwardapi-fhmn": Phase="Running", Reason="", readiness=false. Elapsed: 23.34916179s
Jan 13 19:03:01.122: INFO: Pod "pod-subpath-test-downwardapi-fhmn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.353180404s
STEP: Saw pod success
Jan 13 19:03:01.122: INFO: Pod "pod-subpath-test-downwardapi-fhmn" satisfied condition "success or failure"
Jan 13 19:03:01.126: INFO: Trying to get logs from node hunter-control-plane pod pod-subpath-test-downwardapi-fhmn container test-container-subpath-downwardapi-fhmn: 
STEP: delete the pod
Jan 13 19:03:01.386: INFO: Waiting for pod pod-subpath-test-downwardapi-fhmn to disappear
Jan 13 19:03:01.416: INFO: Pod pod-subpath-test-downwardapi-fhmn no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-fhmn
Jan 13 19:03:01.416: INFO: Deleting pod "pod-subpath-test-downwardapi-fhmn" in namespace "e2e-tests-subpath-njj9k"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 19:03:01.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-njj9k" for this suite.
Jan 13 19:03:07.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 19:03:07.520: INFO: namespace: e2e-tests-subpath-njj9k, resource: bindings, ignored listing per whitelist
Jan 13 19:03:07.570: INFO: namespace e2e-tests-subpath-njj9k deletion completed in 6.148894151s

• [SLOW TEST:31.994 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 19:03:07.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 13 19:03:07.691: INFO: Waiting up to 5m0s for pod "pod-f8de1c33-55d1-11eb-8355-0242ac110009" in namespace "e2e-tests-emptydir-ql5j2" to be "success or failure"
Jan 13 19:03:07.695: INFO: Pod "pod-f8de1c33-55d1-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.594398ms
Jan 13 19:03:09.794: INFO: Pod "pod-f8de1c33-55d1-11eb-8355-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103156019s
Jan 13 19:03:11.798: INFO: Pod "pod-f8de1c33-55d1-11eb-8355-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107183122s
STEP: Saw pod success
Jan 13 19:03:11.798: INFO: Pod "pod-f8de1c33-55d1-11eb-8355-0242ac110009" satisfied condition "success or failure"
Jan 13 19:03:11.802: INFO: Trying to get logs from node hunter-control-plane pod pod-f8de1c33-55d1-11eb-8355-0242ac110009 container test-container: 
STEP: delete the pod
Jan 13 19:03:11.869: INFO: Waiting for pod pod-f8de1c33-55d1-11eb-8355-0242ac110009 to disappear
Jan 13 19:03:11.881: INFO: Pod pod-f8de1c33-55d1-11eb-8355-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 19:03:11.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ql5j2" for this suite.
Jan 13 19:03:17.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 19:03:17.958: INFO: namespace: e2e-tests-emptydir-ql5j2, resource: bindings, ignored listing per whitelist
Jan 13 19:03:18.011: INFO: namespace e2e-tests-emptydir-ql5j2 deletion completed in 6.126710389s

• [SLOW TEST:10.440 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 19:03:18.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan 13 19:03:18.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-gw6mv run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 13 19:03:24.078: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0113 19:03:24.006248    3873 log.go:172] (0xc00076e160) (0xc0007db540) Create stream\nI0113 19:03:24.006278    3873 log.go:172] (0xc00076e160) (0xc0007db540) Stream added, broadcasting: 1\nI0113 19:03:24.009317    3873 log.go:172] (0xc00076e160) Reply frame received for 1\nI0113 19:03:24.009370    3873 log.go:172] (0xc00076e160) (0xc0007db5e0) Create stream\nI0113 19:03:24.009383    3873 log.go:172] (0xc00076e160) (0xc0007db5e0) Stream added, broadcasting: 3\nI0113 19:03:24.010396    3873 log.go:172] (0xc00076e160) Reply frame received for 3\nI0113 19:03:24.010450    3873 log.go:172] (0xc00076e160) (0xc0007320a0) Create stream\nI0113 19:03:24.010462    3873 log.go:172] (0xc00076e160) (0xc0007320a0) Stream added, broadcasting: 5\nI0113 19:03:24.011230    3873 log.go:172] (0xc00076e160) Reply frame received for 5\nI0113 19:03:24.011262    3873 log.go:172] (0xc00076e160) (0xc0007db680) Create stream\nI0113 19:03:24.011272    3873 log.go:172] (0xc00076e160) (0xc0007db680) Stream added, broadcasting: 7\nI0113 19:03:24.012091    3873 log.go:172] (0xc00076e160) Reply frame received for 7\nI0113 19:03:24.012223    3873 log.go:172] (0xc0007db5e0) (3) Writing data frame\nI0113 19:03:24.012320    3873 log.go:172] (0xc0007db5e0) (3) Writing data frame\nI0113 19:03:24.013324    3873 log.go:172] (0xc00076e160) Data frame received for 5\nI0113 19:03:24.013345    3873 log.go:172] (0xc0007320a0) (5) Data frame handling\nI0113 19:03:24.013355    3873 log.go:172] (0xc0007320a0) (5) Data frame sent\nI0113 19:03:24.013901    3873 log.go:172] (0xc00076e160) Data frame received for 5\nI0113 19:03:24.013916    3873 log.go:172] (0xc0007320a0) (5) Data frame handling\nI0113 19:03:24.013924    3873 log.go:172] (0xc0007320a0) (5) Data frame 
sent\nI0113 19:03:24.045219    3873 log.go:172] (0xc00076e160) Data frame received for 7\nI0113 19:03:24.045245    3873 log.go:172] (0xc0007db680) (7) Data frame handling\nI0113 19:03:24.045300    3873 log.go:172] (0xc00076e160) Data frame received for 5\nI0113 19:03:24.045338    3873 log.go:172] (0xc0007320a0) (5) Data frame handling\nI0113 19:03:24.045424    3873 log.go:172] (0xc00076e160) Data frame received for 1\nI0113 19:03:24.045445    3873 log.go:172] (0xc0007db540) (1) Data frame handling\nI0113 19:03:24.045463    3873 log.go:172] (0xc0007db540) (1) Data frame sent\nI0113 19:03:24.046061    3873 log.go:172] (0xc00076e160) (0xc0007db540) Stream removed, broadcasting: 1\nI0113 19:03:24.046172    3873 log.go:172] (0xc00076e160) (0xc0007db5e0) Stream removed, broadcasting: 3\nI0113 19:03:24.046208    3873 log.go:172] (0xc00076e160) Go away received\nI0113 19:03:24.046231    3873 log.go:172] (0xc00076e160) (0xc0007db540) Stream removed, broadcasting: 1\nI0113 19:03:24.046269    3873 log.go:172] (0xc00076e160) (0xc0007db5e0) Stream removed, broadcasting: 3\nI0113 19:03:24.046282    3873 log.go:172] (0xc00076e160) (0xc0007320a0) Stream removed, broadcasting: 5\nI0113 19:03:24.046299    3873 log.go:172] (0xc00076e160) (0xc0007db680) Stream removed, broadcasting: 7\n"
Jan 13 19:03:24.078: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 19:03:26.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gw6mv" for this suite.
Jan 13 19:03:32.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 19:03:32.151: INFO: namespace: e2e-tests-kubectl-gw6mv, resource: bindings, ignored listing per whitelist
Jan 13 19:03:32.198: INFO: namespace e2e-tests-kubectl-gw6mv deletion completed in 6.110080819s

• [SLOW TEST:14.187 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 13 19:03:32.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-knlt
STEP: Creating a pod to test atomic-volume-subpath
Jan 13 19:03:32.308: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-knlt" in namespace "e2e-tests-subpath-6z2cc" to be "success or failure"
Jan 13 19:03:32.328: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Pending", Reason="", readiness=false. Elapsed: 20.149423ms
Jan 13 19:03:34.332: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02391413s
Jan 13 19:03:36.336: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028138702s
Jan 13 19:03:38.396: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088072948s
Jan 13 19:03:40.400: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09227473s
Jan 13 19:03:42.405: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Running", Reason="", readiness=false. Elapsed: 10.097302829s
Jan 13 19:03:44.424: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Running", Reason="", readiness=false. Elapsed: 12.115839415s
Jan 13 19:03:46.441: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Running", Reason="", readiness=false. Elapsed: 14.132527306s
Jan 13 19:03:48.445: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Running", Reason="", readiness=false. Elapsed: 16.136518579s
Jan 13 19:03:50.448: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Running", Reason="", readiness=false. Elapsed: 18.140144366s
Jan 13 19:03:52.452: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Running", Reason="", readiness=false. Elapsed: 20.144295644s
Jan 13 19:03:54.457: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Running", Reason="", readiness=false. Elapsed: 22.148644149s
Jan 13 19:03:56.460: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Running", Reason="", readiness=false. Elapsed: 24.152252012s
Jan 13 19:03:58.464: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Running", Reason="", readiness=false. Elapsed: 26.156282453s
Jan 13 19:04:00.468: INFO: Pod "pod-subpath-test-configmap-knlt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.160068348s
STEP: Saw pod success
Jan 13 19:04:00.468: INFO: Pod "pod-subpath-test-configmap-knlt" satisfied condition "success or failure"
Jan 13 19:04:00.471: INFO: Trying to get logs from node hunter-control-plane pod pod-subpath-test-configmap-knlt container test-container-subpath-configmap-knlt: 
STEP: delete the pod
Jan 13 19:04:00.488: INFO: Waiting for pod pod-subpath-test-configmap-knlt to disappear
Jan 13 19:04:00.575: INFO: Pod pod-subpath-test-configmap-knlt no longer exists
STEP: Deleting pod pod-subpath-test-configmap-knlt
Jan 13 19:04:00.575: INFO: Deleting pod "pod-subpath-test-configmap-knlt" in namespace "e2e-tests-subpath-6z2cc"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 13 19:04:00.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-6z2cc" for this suite.
Jan 13 19:04:06.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 13 19:04:06.612: INFO: namespace: e2e-tests-subpath-6z2cc, resource: bindings, ignored listing per whitelist
Jan 13 19:04:06.736: INFO: namespace e2e-tests-subpath-6z2cc deletion completed in 6.146987981s

• [SLOW TEST:34.538 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
Jan 13 19:04:06.737: INFO: Running AfterSuite actions on all nodes
Jan 13 19:04:06.737: INFO: Running AfterSuite actions on node 1
Jan 13 19:04:06.737: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6330.102 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS